AdaBoost
In the previous section, we saw that sampling with replacement leads to datasets where the samples are implicitly and randomly reweighted. However, if M is very large, most of the samples will appear only once and, moreover, all the choices are completely random.

AdaBoost is an algorithm proposed by Freund and Schapire that tries to maximize the efficiency of each weak learner by employing adaptive boosting (hence the name). In particular, the ensemble is grown sequentially and the data distribution is recomputed at each step so as to increase the weight of the samples that were misclassified and reduce the weight of the ones that were correctly classified. In this way, every new learner is forced to focus on the regions that were more problematic for the previous estimators. The reader can immediately understand that, contrary to random forests and other bagging methods, boosting doesn't rely on randomness to reduce the variance and improve the accuracy. Rather, it works...
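To make the adaptive reweighting concrete, the following is a minimal sketch of the core loop (in the spirit of AdaBoost.M1 for binary labels in {-1, +1}). The dataset generated with make_classification, the choice of 50 decision stumps, and the stopping conditions are illustrative assumptions, not details taken from the text.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Illustrative dataset (assumption): 500 samples, labels remapped to {-1, +1}
X, y = make_classification(n_samples=500, random_state=1000)
y = 2 * y - 1

n_estimators = 50
w = np.full(X.shape[0], 1.0 / X.shape[0])  # start from a uniform distribution
learners, alphas = [], []

for _ in range(n_estimators):
    # Weak learner: a decision stump trained on the current distribution
    stump = DecisionTreeClassifier(max_depth=1)
    stump.fit(X, y, sample_weight=w)
    y_pred = stump.predict(X)

    # Weighted error and estimator weight (alpha)
    err = np.sum(w[y_pred != y])
    if err <= 0.0 or err >= 0.5:
        # Perfect fit or no better than chance: stop growing the ensemble
        break
    alpha = 0.5 * np.log((1.0 - err) / err)

    # Increase the weight of misclassified samples, decrease the weight of
    # correctly classified ones, then renormalize to keep a valid distribution
    w *= np.exp(-alpha * y * y_pred)
    w /= np.sum(w)

    learners.append(stump)
    alphas.append(alpha)

# Final prediction: sign of the weighted sum of the weak learners
F = np.sum([a * l.predict(X) for a, l in zip(alphas, learners)], axis=0)
print("Training accuracy: {:.3f}".format(np.mean(np.sign(F) == y)))
```

The key point illustrated here is the weight update: samples misclassified by the current weak learner have their weights multiplied by a factor greater than 1, so the next learner is forced to pay more attention to them, exactly as described above.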