In this chapter, we explored the gradient boosting algorithm, which builds ensembles sequentially, adding shallow decision trees that each use only a small number of features to improve on the predictions made so far. We saw how gradient boosting trees can be flexibly applied to a broad range of loss functions and offer many opportunities to tune the model to a given dataset and learning task.
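To make the sequential idea concrete, the following minimal sketch uses scikit-learn's GradientBoostingRegressor on synthetic data (an illustrative choice, not the chapter's specific implementation or dataset): shallow trees are added one after another, each fit to the errors of the ensemble built so far, with the learning rate shrinking each tree's contribution and max_features limiting the features considered at each split.

```python
# Illustrative sketch only: parameters and data are hypothetical.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic regression data stands in for the chapter's dataset
X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

gbm = GradientBoostingRegressor(
    n_estimators=200,      # number of sequential boosting rounds
    learning_rate=0.05,    # shrinkage applied to each tree's contribution
    max_depth=3,           # shallow trees act as weak learners
    max_features=5,        # each split considers only a small subset of features
    loss='squared_error',  # other losses (e.g. 'huber', 'quantile') are supported
    random_state=0,
)
gbm.fit(X_train, y_train)
print(f'Test MSE: {mean_squared_error(y_test, gbm.predict(X_test)):.3f}')
```

Swapping the loss function or tuning the tree depth, learning rate, and number of rounds is how the same algorithm adapts to different datasets and learning tasks.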
Recent implementations have greatly facilitated the use of gradient boosting by accelerating the training process and offering more consistent and detailed insights into the importance of features and the drivers of individual predictions. In the next chapter, we will turn to Bayesian approaches to ML.