In this chapter, we focus on decision trees and ensemble algorithms. Decision trees are easy to interpret and visualize because they mirror the step-by-step decision-making process we are familiar with. Ensembles can be partially interpreted and visualized, but because they combine many parts (base estimators), we cannot always read them easily.
The idea behind ensemble learning is that several estimators combined can perform better than any single one. There are two families of ensemble methods implemented in scikit-learn: averaging methods and boosting methods. Averaging methods (random forests, bagging, extra trees) reduce variance by averaging the predictions of several independently trained estimators. Boosting methods (gradient boosting and AdaBoost) reduce bias by building base estimators sequentially, each new estimator correcting the errors of the ensemble built so far.
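As a minimal sketch of the two families, the snippet below trains one representative of each on a synthetic dataset: a random forest (averaging) and a gradient boosting classifier (boosting). The dataset and hyperparameter values are illustrative choices, not prescriptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Illustrative toy dataset; any classification data would do.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Averaging method: many trees trained independently, predictions averaged.
forest = RandomForestClassifier(n_estimators=100, random_state=0)

# Boosting method: shallow trees added sequentially, each correcting the last.
boosting = GradientBoostingClassifier(n_estimators=100, random_state=0)

for name, model in [("random forest", forest), ("gradient boosting", boosting)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```

Both estimators expose the same `fit`/`predict` interface, so they are interchangeable in a pipeline; what differs is how their base trees are built and combined.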
A common characteristic of many ensemble constructions is...