Gradient boosting is a machine learning technique based on the principle of boosting: weak learners, typically decision trees, are trained one after another, with each new learner focusing on the observations that previous learners predicted poorly, and their outputs are combined into an ensemble.
Gradient boosting trains models sequentially, repeating the following steps (a minimal sketch follows the list):
- Fitting an initial model to the data
- Fitting a new model to the residuals, that is, the errors of the current ensemble
- Adding the new model to the ensemble to create an updated model, then repeating
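The sketch below illustrates this loop under simple assumptions: squared-error loss, scikit-learn decision trees as the weak learners, and illustrative values for the number of rounds and the learning rate; it is a teaching aid, not a production implementation.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

n_rounds = 50        # number of boosting iterations (illustrative choice)
learning_rate = 0.1  # shrinkage applied to each new tree's contribution

# Start from a constant prediction (the mean minimizes squared error).
prediction = np.full_like(y, y.mean(), dtype=float)
trees = []

for _ in range(n_rounds):
    residuals = y - prediction              # errors of the current ensemble
    tree = DecisionTreeRegressor(max_depth=3)
    tree.fit(X, residuals)                  # fit the next weak learner to the residuals
    prediction += learning_rate * tree.predict(X)
    trees.append(tree)

print("final training MSE:", np.mean((y - prediction) ** 2))
```

Each iteration shrinks the remaining error: the new tree models what the ensemble so far gets wrong, and the learning rate controls how aggressively its correction is applied.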
While AdaBoost identifies difficult data points by reweighting them, gradient boosting does so by computing the gradients of the loss function. The loss function measures how well a model fits the data on which it is trained, and its choice generally depends on the type of problem being solved.
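The link between the two descriptions above, fitting to residuals and following loss gradients, can be checked numerically. The snippet below assumes squared-error loss (the values of `y` and `F` are made up for illustration) and uses a finite-difference approximation to show that the negative gradient of this loss with respect to the predictions is exactly the residual:

```python
import numpy as np

def squared_error_loss(y, F):
    return 0.5 * (y - F) ** 2

y = np.array([3.0, -1.0, 2.5])   # true targets (illustrative values)
F = np.array([2.0,  0.5, 2.0])   # current ensemble predictions

# Approximate dL/dF with a central difference.
eps = 1e-6
grad = (squared_error_loss(y, F + eps) - squared_error_loss(y, F - eps)) / (2 * eps)

# For squared-error loss the negative gradient equals the residual y - F,
# which is why fitting the next tree to the residuals amounts to a gradient step.
print(-grad)   # approximately [ 1.  -1.5  0.5]
print(y - F)   # [ 1.  -1.5  0.5]
```

For other losses, such as absolute error or logistic loss for classification, the negative gradient is no longer the plain residual, which is what makes the gradient view more general than the residual view.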