Ensemble methods
Ensemble methods combine many models (often weak ones) to create a stronger one that either reduces bias (the systematic error a model makes when its assumptions are too simple for the data) or reduces variance (how much the model's predictions change with the particular training data, which governs how well it generalizes to unseen data). We have to strike a balance between complex models, which tend to overfit and therefore have high variance, and simple models, which tend to underfit and therefore have high bias. This is called the bias-variance trade-off, which is illustrated in the following subplots:
Ensemble methods can be broken down into three categories: boosting, bagging, and stacking. Boosting trains many weak learners sequentially, each one learning from the mistakes of the ones before it, to reduce bias and build a stronger learner. Bagging (bootstrap aggregation), on the other hand, trains many models on bootstrap samples of the data and aggregates their results (using voting for classification and averaging for regression) to reduce variance. Stacking trains a final model (a meta-learner) on the predictions of several different base models, letting it learn the best way to combine them.
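As a rough illustration of these three flavors, here is a minimal sketch using scikit-learn's built-in ensemble estimators; the dataset, base learners, and hyperparameters below are illustrative assumptions rather than a prescription:

```python
# A minimal sketch of the three ensemble flavors, assuming scikit-learn is
# installed; the dataset, base learners, and settings are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (
    GradientBoostingClassifier,  # boosting: weak learners trained sequentially
    BaggingClassifier,           # bagging: bootstrap samples + majority vote
    StackingClassifier,          # stacking: meta-learner over base predictions
)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

ensembles = {
    'boosting': GradientBoostingClassifier(random_state=0),
    # default base estimator is a decision tree; each one sees a bootstrap sample
    'bagging': BaggingClassifier(n_estimators=50, random_state=0),
    'stacking': StackingClassifier(
        estimators=[
            ('gbm', GradientBoostingClassifier(random_state=0)),
            ('logreg', make_pipeline(StandardScaler(),
                                     LogisticRegression(max_iter=1000))),
        ],
        final_estimator=LogisticRegression(max_iter=1000),
    ),
}

# Compare the three approaches with 5-fold cross-validated accuracy
for name, model in ensembles.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f'{name:>8}: mean accuracy = {scores.mean():.3f}')
```

In this sketch, the boosted trees target bias, the bagged trees average away variance, and the stacked model learns how much weight to give each base learner's predictions.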