Gradient boosting

Gradient boosting is another boosting algorithm. It is a more general boosting framework than AdaBoost, which also makes it more complicated and math-intensive. Instead of emphasizing problematic instances by assigning weights and resampling the dataset, gradient boosting fits each new base learner to the errors (residuals) of the ensemble built so far. Furthermore, gradient boosting uses decision trees of varying depths. In this section, we will present gradient boosting without delving much into the underlying math. Instead, we will present the basic concepts, as well as a custom Python implementation.
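Before working through the full implementation, the core idea of fitting each new learner to the current errors can be illustrated with a minimal sketch. The choice of one-split regression stumps as base learners and squared-error loss are assumptions made for brevity here, not necessarily the choices used in the rest of this section:

```python
import numpy as np

def fit_stump(X, y):
    """Fit a one-split regression stump by exhaustive search (squared error)."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            mask = X[:, j] <= t
            left, right = y[mask], y[~mask]
            if left.size == 0 or right.size == 0:
                continue
            err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if err < best_err:
                best, best_err = (j, t, left.mean(), right.mean()), err
    return best

def stump_predict(stump, X):
    j, t, left_val, right_val = stump
    return np.where(X[:, j] <= t, left_val, right_val)

def gradient_boost(X, y, n_rounds=100, lr=0.1):
    """Boost stumps on the residuals of the current ensemble (squared-error loss)."""
    f0 = y.mean()                        # initial prediction: the mean minimizes squared error
    pred = np.full(y.shape, f0)
    stumps = []
    for _ in range(n_rounds):
        residuals = y - pred             # negative gradient of squared error w.r.t. predictions
        stump = fit_stump(X, residuals)  # the next learner targets the current errors
        pred = pred + lr * stump_predict(stump, X)
        stumps.append(stump)
    return f0, lr, stumps

def boost_predict(model, X):
    f0, lr, stumps = model
    pred = np.full(X.shape[0], f0)
    for stump in stumps:
        pred = pred + lr * stump_predict(stump, X)
    return pred
```

The shrinkage factor `lr` scales each learner's contribution, so no single stump dominates and the ensemble corrects itself gradually over many rounds.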
Creating the ensemble
The gradient boosting algorithm (for regression purposes) starts by calculating the...