Gradient Boosting Machines and XGBoost
Gradient Boosting Machines (GBM) are an ensemble algorithm. The main idea behind GBM is to take a base model and fit it to the data over and over, each time correcting the errors of the previous step and gradually improving the performance. This is different from Random Forest: GBM builds models sequentially, each one trying to improve on the results of the previous ones, while a random forest builds multiple independent models and averages their predictions.
This idea is best illustrated with a Linear Regression example. To fit a sequence of linear regressions to the data, we can do the following (see the sketch after this list):
- Fit the base model to the original data.
- Take the difference between the target values and the predictions of the first model (the residuals of step 1) and train the second model on these residuals.
- Take the difference between the residuals of step 1 and the predictions of the second model (the residuals of step 2) and fit the third model to it.
- Continue until you train N models.
- To make a prediction, sum up the predictions of all the individual models.
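
Below is a minimal sketch of this recipe, assuming scikit-learn's LinearRegression as the base model and a small synthetic dataset (the data and the names `n_models` and `predict_boosted` are illustrative, not from the original text):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative synthetic data (not from the original text).
rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

n_models = 5          # N in the recipe above
models = []
residuals = y.copy()  # before step 1, the "residuals" are the targets themselves

for _ in range(n_models):
    model = LinearRegression()
    model.fit(X, residuals)                   # fit the current residuals
    residuals = residuals - model.predict(X)  # what is still unexplained
    models.append(model)

# Prediction: sum up the predictions of all individual models.
def predict_boosted(X_new):
    return sum(m.predict(X_new) for m in models)

print(predict_boosted(X[:3]))
```

Note that with ordinary least squares the residuals after step 1 are already orthogonal to the features, so the later linear models contribute almost nothing; the example only shows the mechanics. In practice, GBM implementations such as XGBoost use shallow decision trees as the base model, where each additional step does add information.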