The general boosting algorithm
The tree-based ensembles in the previous chapters, bagging and random forests, are important extensions of decision trees. However, while bagging provides greater stability by averaging multiple decision trees, the bias persists. This limitation motivated Breiman to sample the covariates at each split point to generate an ensemble of "independent" trees, laying the foundation for random forests. The trees in a random forest can be grown in parallel, as is the case with bagging, and the idea of averaging over multiple trees is to balance the bias-variance trade-off. Boosting is the third major extension of decision trees, and probably the most effective one. Like bagging and random forests, it ensembles homogeneous base learners (in this case, trees). The design of the boosting algorithm is completely different, though: it is a sequential ensemble method in that the residual/misclassified...
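The sequential idea can be made concrete with a minimal sketch of boosting for regression under squared-error loss: each new tree is fit to the residuals left by the ensemble so far, rather than to the original targets. The data, the number of trees, and the learning rate below are illustrative assumptions, not values from the text.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Illustrative synthetic data (an assumption for this sketch).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

learning_rate = 0.1
pred = np.full_like(y, y.mean())   # start from the constant model
trees = []
for _ in range(100):
    residual = y - pred            # what the ensemble still gets wrong
    tree = DecisionTreeRegressor(max_depth=1).fit(X, residual)
    pred += learning_rate * tree.predict(X)  # sequential update
    trees.append(tree)

mse_start = np.mean((y - y.mean()) ** 2)
mse_end = np.mean((y - pred) ** 2)
print(mse_end < mse_start)
```

Unlike bagging, the trees here cannot be grown in parallel: each one depends on the predictions of all its predecessors through the residuals.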