I was looking up the differences between boosting and bagging, and I see this quoted everywhere:
If the classifier is unstable (high variance), then we should apply Bagging. If the classifier is stable and simple (high bias) then we should apply Boosting.
Breiman [1996a] showed that Bagging is effective on "unstable" learning algorithms, where small changes in the training set result in large changes in predictions. Breiman [1996a] claimed that neural networks and decision trees are examples of unstable learning algorithms.
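To make the question concrete, here is a toy sketch (my own illustration, not Breiman's experiments) of what "unstable" means: even a depth-1 decision stump, refit on bootstrap resamples of the same noisy data, can move its split point around, and averaging many such stumps (bagging) gives a steadier decision rule. All names here are hypothetical helpers, not a real library API.

```python
import random

random.seed(0)

# 1-D training data: label is 1 to the right of 0.5, with some label noise.
X = [i / 40 for i in range(40)]
y = [1 if x > 0.5 else 0 for x in X]
for i in (9, 19, 29):          # inject label noise
    y[i] = 1 - y[i]

def fit_stump(xs, ys):
    """Return the threshold t minimizing 0/1 error of 'predict 1 if x > t'."""
    best_t, best_err = 0.0, len(ys) + 1
    for t in sorted(set(xs)):
        err = sum((x > t) != (lab == 1) for x, lab in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bootstrap(xs, ys):
    idx = [random.randrange(len(xs)) for _ in xs]
    return [xs[i] for i in idx], [ys[i] for i in idx]

# Instability: refitting the stump on resamples moves the split point around.
thresholds = [fit_stump(*bootstrap(X, y)) for _ in range(50)]
print("threshold range over 50 resamples:", min(thresholds), max(thresholds))

# Bagging: majority vote over the 50 stumps smooths out that variance.
def bagged_predict(x):
    votes = sum(x > t for t in thresholds)
    return 1 if votes > len(thresholds) / 2 else 0

print("bagged prediction at 0.9:", bagged_predict(0.9))
print("bagged prediction at 0.1:", bagged_predict(0.1))
```

The individual thresholds jump around from resample to resample (that variability is exactly Breiman's "instability"), while the bagged vote stays consistent away from the boundary.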
Aren't gradient-boosted decision trees (as in XGBoost) among the most preferred/recommended learners? That seems contradictory, given that decision trees are apparently unstable learners.