Ensemble learning is a method for combining the results produced by multiple learners into a single prediction, with the aim of improving classification and regression performance. In previous chapters, we discussed several classification methods. These methods take different approaches, but they share the same goal: finding an optimal classification model.
However, a single classifier may be imperfect, misclassifying data in certain categories. Because different classifiers tend to make different errors, a better approach is to combine their results by voting. In other words, if we aggregate the predictions of every classifier on the same input, we may create a model superior to any individual method.
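The voting idea can be illustrated with a minimal sketch in plain Python. The three classifiers and their label predictions below are hypothetical, invented purely to show how a majority vote is taken per sample:

```python
from collections import Counter

# Predictions from three hypothetical classifiers on five samples
# (the labels are illustrative, not from a real dataset)
preds = [
    ["spam", "ham",  "spam", "ham", "spam"],  # classifier 1
    ["spam", "spam", "ham",  "ham", "spam"],  # classifier 2
    ["ham",  "spam", "spam", "ham", "spam"],  # classifier 3
]

def majority_vote(predictions):
    """Combine the per-classifier predictions for each sample by majority vote."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

print(majority_vote(preds))  # → ['spam', 'spam', 'spam', 'ham', 'spam']
```

Note that on samples 1, 2, and 3 one of the classifiers is outvoted by the other two, which is exactly how the ensemble can correct an individual model's mistakes.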
In ensemble learning, bagging, boosting, and random forest are the three most common methods:
- Bagging is a voting method that first uses bootstrap sampling to generate several different training sets, then fits a separate model to each set and combines their predictions by majority vote.
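As a hedged sketch of the bagging idea, the following standard-library Python example draws bootstrap resamples (sampling with replacement) from a toy one-dimensional dataset, fits a simple threshold "stump" to each resample, and predicts by majority vote. The dataset, the stump learner, and the function names are all assumptions made for illustration, not part of any particular library:

```python
import random

random.seed(0)

# Toy 1-D dataset: points at or below 5 are class 0, points above 5 are class 1
data = [(x, int(x > 5)) for x in range(10)]

def train_stump(sample):
    """Fit a threshold stump: pick the cut t that best separates the sample."""
    best_t, best_acc = 0, -1
    for t in range(11):
        acc = sum((x > t) == bool(y) for x, y in sample)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def bagging(data, n_models=25):
    """Train each stump on its own bootstrap resample (drawn with replacement)."""
    return [train_stump(random.choices(data, k=len(data))) for _ in range(n_models)]

def predict(models, x):
    """Majority vote over the ensemble's individual stump predictions."""
    votes = sum(x > t for t in models)
    return int(votes * 2 > len(models))

models = bagging(data)
print([predict(models, x) for x in [2, 7]])  # → [0, 1]
```

Because every stump sees a slightly different resample, the ensemble's vote is less sensitive to any one model's quirks than a single stump would be.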