Getting started – adaptive boosting
Like bagging, boosting is an ensemble learning algorithm that combines base learners (typically decision trees) into an ensemble, and it is likewise a general method, or metamethod, that can be applied to many statistical learning techniques. Boosting was initially developed for classification problems but can also be used for regression, and has been called one of the most potent learning ideas introduced in the last 20 years (Hastie, Tibshirani, and Friedman 2009).
The motivation behind boosting was to combine the outputs of many weak models, where "weak" means a model that performs only slightly better than a random guess, into a highly accurate, boosted joint prediction (Schapire and Freund 2012).
In general, boosting learns an additive hypothesis, $H_M$, of a form similar to linear regression. However, each of the $m = 1, \ldots, M$ elements of the summation is a weak base learner, called $h_m$, which itself requires training.
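To make the additive form concrete, the following is a minimal from-scratch sketch of discrete AdaBoost for binary labels coded as -1/+1, whose prediction takes the standard weighted-vote form $H_M(x) = \operatorname{sign}\left(\sum_{m=1}^{M} \alpha_m h_m(x)\right)$. The function names (adaboost_fit, adaboost_predict), the choice of decision stumps as weak learners, and M=50 rounds are illustrative assumptions, not prescriptions from the text:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, M=50):
    """Fit up to M decision stumps; y must be coded as -1/+1."""
    n = len(y)
    w = np.full(n, 1 / n)                    # start with uniform sample weights
    learners, alphas = [], []
    for _ in range(M):
        stump = DecisionTreeClassifier(max_depth=1)  # weak learner h_m
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = w[pred != y].sum()             # weighted training error
        if err >= 0.5:                       # no better than a random guess: stop
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))  # learner weight alpha_m
        w *= np.exp(-alpha * y * pred)       # upweight misclassified observations
        w /= w.sum()                         # renormalize to a distribution
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(X, learners, alphas):
    """Additive hypothesis: H_M(x) = sign(sum_m alpha_m * h_m(x))."""
    scores = sum(alpha * h.predict(X) for h, alpha in zip(learners, alphas))
    return np.sign(scores)

X, y = make_classification(n_samples=500, random_state=0)
y = 2 * y - 1                                # recode labels {0, 1} -> {-1, +1}
learners, alphas = adaboost_fit(X, y)
print(f'train accuracy: {(adaboost_predict(X, learners, alphas) == y).mean():.3f}')
```

The sign of the weighted sum implements the additive hypothesis: each round reweights the sample so that later stumps focus on the observations their predecessors misclassified, and more accurate stumps receive a larger weight $\alpha_m$ in the final vote.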