In this chapter, we discussed Random Forests, an ensemble method that uses decision trees as its base learners. We presented two basic approaches to constructing the trees: conventional Random Forests, where a random subset of features is considered at each split, and Extra Trees, where the split points are chosen almost at random. We discussed the basic characteristics of the ensemble method, and we presented regression and classification examples using the scikit-learn implementations of Random Forests and Extra Trees. The key points that summarize the contents of this chapter are provided below.
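As a brief recap of the scikit-learn usage covered in the chapter, the following sketch fits both a conventional Random Forest and an Extra Trees classifier on a synthetic dataset and compares their test accuracy. The dataset and parameter values here are illustrative assumptions, not the chapter's original examples.

```python
# Minimal sketch: Random Forests vs. Extra Trees in scikit-learn.
# Dataset and hyperparameters are illustrative, not from the chapter's examples.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

# A synthetic binary classification problem.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Conventional Random Forest: bootstrap-sampled training sets, and the
# best split is computed over a random subset of features at each node.
rf = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
rf.fit(X_train, y_train)

# Extra Trees: split thresholds are drawn at random for each candidate
# feature, trading a little bias for lower variance and faster training.
et = ExtraTreesClassifier(n_estimators=100, max_features="sqrt", random_state=0)
et.fit(X_train, y_train)

print(f"Random Forest accuracy: {rf.score(X_test, y_test):.3f}")
print(f"Extra Trees accuracy:   {et.score(X_test, y_test):.3f}")
```

For regression problems, the corresponding `RandomForestRegressor` and `ExtraTreesRegressor` classes expose the same interface.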
Random Forests use bagging to create training sets for their base learners. At each node, each tree considers only a random subset of the available features and computes the optimal feature/split-point combination. The number of features to consider at each...