Simple Methods for Ensemble Learning
As defined earlier in the chapter, ensemble learning combines the strengths of individual models to produce a model that outperforms any of them alone. In this section, we will explore some simple techniques such as the following:
- Averaging
- Weighted averaging
- Max voting
Let's take a look at each of them in turn.
Averaging
Averaging is a naïve way of doing ensemble learning; however, it is extremely useful too. The basic idea behind this technique is to take the predictions of multiple individual models and average them to generate a final prediction. The assumption is that by averaging the predictions of different individual learners, we average out the errors made by each learner, thereby generating a model superior to the base models. One prerequisite for averaging to work well is that the errors of the base models be largely uncorrelated. This means that the individual models should not make the same kinds of mistakes on the same examples; diverse learners are therefore preferred.
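The following is a minimal sketch of prediction averaging for a regression task. It assumes scikit-learn-style estimators and uses a synthetic dataset purely for illustration; the specific base learners chosen here are just one reasonable mix of diverse models:

```python
# A minimal sketch of prediction averaging, assuming scikit-learn-style
# regressors; the dataset and choice of base learners are illustrative.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor

# Synthetic regression data, split into train and test sets.
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Three diverse base learners, each trained independently on the same data.
models = [
    LinearRegression(),
    DecisionTreeRegressor(random_state=42),
    KNeighborsRegressor(),
]
for model in models:
    model.fit(X_train, y_train)

# The ensemble prediction is the simple mean of the individual predictions.
predictions = np.column_stack([model.predict(X_test) for model in models])
ensemble_prediction = predictions.mean(axis=1)
```

Note that the base learners come from different model families (linear, tree-based, and instance-based); this is one common way to encourage the uncorrelated errors that averaging relies on.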