Summary
This chapter covered the two most common ensemble learning methods for decision trees: bagging and boosting. We looked at the Random Forests and ExtraTrees algorithms, which build decision tree ensembles using bagging.
This chapter also gave a detailed overview of boosting for decision trees by working through the GBDT algorithm step by step, illustrating how gradient boosting is applied. We covered practical examples of random forests, ExtraTrees, and GBDTs using scikit-learn.
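As a quick recap, the sketch below shows how these three ensembles can be trained and compared with scikit-learn. The synthetic dataset and hyperparameter values are illustrative placeholders, not the chapter's actual examples:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    ExtraTreesClassifier,
    GradientBoostingClassifier,
    RandomForestClassifier,
)
from sklearn.model_selection import cross_val_score

# Illustrative synthetic data; any classification dataset works here
X, y = make_classification(n_samples=500, n_features=20, random_state=42)

# RandomForest and ExtraTrees build their ensembles with bagging;
# GradientBoosting is scikit-learn's GBDT implementation
for cls in (RandomForestClassifier, ExtraTreesClassifier, GradientBoostingClassifier):
    score = cross_val_score(cls(random_state=42), X, y, cv=5).mean()
    print(f"{cls.__name__}: mean CV accuracy = {score:.3f}")
```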
Finally, we looked at how dropouts can be applied to GBDTs with the DART algorithm. With a thorough understanding of decision tree ensemble techniques, we are ready to dive deep into LightGBM.
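Since scikit-learn does not implement DART, the following is a minimal sketch using XGBoost's DART booster as a stand-in (an assumption for illustration; it is not the chapter's own example). Here, rate_drop controls the fraction of existing trees dropped in each boosting round:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier  # assumed available; scikit-learn has no DART

X, y = make_classification(n_samples=500, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# booster="dart" enables tree dropout during boosting
model = XGBClassifier(
    booster="dart",
    rate_drop=0.1,   # fraction of trees dropped per boosting round
    n_estimators=100,
)
model.fit(X_train, y_train)
print(f"test accuracy = {model.score(X_test, y_test):.3f}")
```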
The next chapter introduces the LightGBM library in detail, covering both its theoretical advancements and their practical application. We will also look at using LightGBM with Python to solve machine learning problems.