Chapter 8. Learning with Ensembles
The motivation for creating machine learning ensembles comes from clear intuitions and is grounded in a rich theoretical history. Diversity, in many natural and human-made systems, makes them more resilient to perturbations. Similarly, we have seen that averaging results from a number of measurements can often produce more stable models that are less susceptible to random fluctuations, such as outliers or errors in data collection.
In this chapter, we will divide this rather large and diverse space into the following topics:
- Ensemble types
- Bagging
- Random forests
- Boosting
Ensemble types
Ensemble techniques can be broadly divided into two types:
- Averaging method: This is the method in which several estimators are run independently and their predictions are averaged. Because the individual errors tend to cancel out, the combined estimator usually has lower variance than any single base estimator. This includes random forests and bagging methods.
- Boosting method: This is the method in which weak learners are built sequentially using weighted distributions of the data based on the errors of the previous learners, so that each new learner focuses on the examples its predecessors got wrong.
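The variance-reduction intuition behind averaging methods can be sketched with a small standard-library simulation (this is an illustrative toy, not a real learner): each "estimator" is an unbiased but noisy guess at a target value, and averaging several independent guesses yields a visibly more stable prediction.

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 10.0   # the quantity every estimator tries to predict
NOISE_SD = 2.0      # each weak estimator is unbiased but noisy
N_TRIALS = 2000     # repetitions used to measure the variances

def noisy_estimate():
    # a single weak estimator: correct on average, high variance
    return random.gauss(TRUE_VALUE, NOISE_SD)

def ensemble_estimate(n_estimators):
    # averaging method: run the estimators independently,
    # then average their predictions into one combined prediction
    return statistics.mean(noisy_estimate() for _ in range(n_estimators))

single_var = statistics.variance([noisy_estimate() for _ in range(N_TRIALS)])
ensemble_var = statistics.variance([ensemble_estimate(10) for _ in range(N_TRIALS)])

print(f"single estimator variance:   {single_var:.2f}")
print(f"10-member ensemble variance: {ensemble_var:.2f}")
```

For independent, unbiased estimators, averaging n of them divides the variance by roughly n, which is why the ensemble's variance here comes out close to one tenth of the single estimator's. Bagging and random forests exploit exactly this effect, using resampling and feature randomness to keep the base estimators close to independent.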