Chapter 4. Random Forests
The previous chapter introduced bagging as an ensemble technique built from homogeneous base learners, with the decision tree serving as the base learner. A shortcoming of bagging is that the bootstrap trees are correlated: since every split may draw on the same strong predictors, the trees tend to resemble one another. Consequently, although averaging reduces the variance of the predictions, the reduction is limited by this correlation, and the bias persists. Breiman proposed randomly sampling a subset of the covariates at each split, a device that decorrelates the bootstrap trees.
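The following sketch illustrates the idea using scikit-learn, which is an assumption of convenience rather than the book's own code; the simulated dataset from make_classification and the parameter choices (n_estimators, max_features) are likewise illustrative. Setting max_features=None makes every split consider all covariates, mimicking bagging, while max_features="sqrt" applies Breiman's per-split covariate sampling.

```python
# A minimal sketch (scikit-learn assumed) of how restricting each split to a
# random subset of covariates decorrelates the bootstrap trees.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=42)

# Bagging-like ensemble: every split may consider all 20 covariates,
# so the bootstrap trees tend to reuse the same strong predictors.
bagged = RandomForestClassifier(n_estimators=200, max_features=None,
                                random_state=42).fit(X, y)

# Random forest: each split draws sqrt(20) ~ 4 covariates at random,
# forcing different trees to exploit different predictors.
forest = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                                random_state=42).fit(X, y)

# Average pairwise correlation of the individual trees' predictions;
# the covariate-sampled forest should show the lower value.
for name, model in [("bagging", bagged), ("random forest", forest)]:
    preds = np.array([tree.predict(X) for tree in model.estimators_])
    corr = np.corrcoef(preds)
    n = len(preds)
    avg = (corr.sum() - n) / (n * n - n)   # mean of off-diagonal entries
    print(name, round(avg, 3))
```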
In the first section of this chapter, the random forest algorithm is introduced and illustrated. The notion of variable importance is crucial to decision trees and all of their variants, and a section is devoted to illustrating this concept clearly. Do random forests perform better than bagging? An answer is provided in the following section.
Breiman also emphasized the usefulness of proximity plots in the context of random forests, and we will take these up later in the chapter.
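As a brief preview, Breiman's proximity measure records, for each pair of observations, the fraction of trees that route both observations to the same terminal node. The sketch below is one hedged way to compute such a proximity matrix; the use of scikit-learn's apply method and the simulated data are assumptions for illustration, not the book's own construction.

```python
# A sketch of Breiman's proximity measure: the proximity of two observations
# is the fraction of trees in which they fall in the same terminal node.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

leaves = forest.apply(X)   # shape (n_samples, n_trees): leaf index per tree
# proximity[i, j] = mean over trees of [leaf of i == leaf of j]
proximity = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
print(proximity.shape)     # (200, 200); the diagonal is exactly 1.0
```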