When the trees in the forest are depth-1 trees (also known as decision stumps) and we perform boosting instead of bagging, the resulting algorithm is called AdaBoost.
AdaBoost adjusts the dataset's case weights at each iteration by performing the following actions:
- Training a decision stump on the currently weighted data
- Increasing the weights of the cases the stump labeled incorrectly, while reducing the weights of the correctly labeled cases
This iterative reweighting causes each new stump in the ensemble to concentrate on the cases its predecessors misclassified: because those cases now carry larger weights, the next stump is fit with an emphasis on classifying them correctly.
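For reference, the standard AdaBoost update (the classical formulation, not spelled out above) makes this precise. With labels $y_i \in \{-1, +1\}$, a stump $h_t$ with weighted error rate $\epsilon_t$ receives the vote weight $\alpha_t$, and every case weight $w_i$ is rescaled:

$$
\alpha_t = \tfrac{1}{2}\ln\frac{1-\epsilon_t}{\epsilon_t},
\qquad
w_i \leftarrow \frac{w_i\,e^{-\alpha_t\,y_i\,h_t(x_i)}}{Z_t},
$$

where $Z_t$ normalizes the weights to sum to 1. A misclassified case has $y_i h_t(x_i) = -1$, so its weight is multiplied by $e^{\alpha_t} > 1$ whenever the stump beats random guessing ($\epsilon_t < \tfrac{1}{2}$); correctly labeled cases shrink by the factor $e^{-\alpha_t}$.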
Finally, the stumps are combined into a single classifier by a weighted vote, with more accurate stumps given a larger say, as in the sketch that follows.
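To make the loop concrete, here is a minimal from-scratch sketch of the procedure described above (this is an illustrative implementation, not any library's internals; it assumes binary labels encoded as -1/+1 and borrows scikit-learn's `DecisionTreeClassifier` with `max_depth=1` as the stump):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    """Boost depth-1 trees (decision stumps). Assumes y contains -1/+1 labels."""
    n = len(y)
    w = np.full(n, 1.0 / n)                    # start with uniform case weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)       # the stump sees the weighted cases
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)  # weighted error rate
        alpha = 0.5 * np.log((1 - err) / err)  # stump's vote weight: larger when more accurate
        w *= np.exp(-alpha * y * pred)         # raise weights on mistakes, shrink them on hits
        w /= w.sum()                           # renormalize so the weights form a distribution
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    """Combine the stumps into a final classifier via a weighted vote."""
    votes = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(votes)                      # a tied vote sum of 0 comes out as 0 here
```

With training data in this form, usage looks like `stumps, alphas = adaboost_fit(X_train, y_train)` followed by `y_hat = adaboost_predict(stumps, alphas, X_test)`. In practice you would reach for `sklearn.ensemble.AdaBoostClassifier`, which implements the same idea.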