Chapter 6. Boosting Refinements
In the previous chapter, we learned about the boosting algorithm: we looked at it in its structural form, illustrated it with a numerical example, and then applied it to regression and classification problems. In this brief chapter, we will cover some of the theoretical aspects of the boosting algorithm and its underpinnings.
In this chapter, we will also look at why the boosting algorithm works from a few different perspectives. Different classes of problems require different loss functions for the boosting techniques to be effective, and in the next section we will explore the kinds of loss functions we can choose from. The extreme gradient boosting method is then outlined in the section dedicated to the xgboost package. Finally, we will discuss the h2o package, which is also useful for other ensemble methods.
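To make the loss-function point concrete before the next section, here is a sketch of two standard choices, written in notation we introduce only for this illustration (y is the observed response and f(x) is the ensemble's prediction): the squared-error loss commonly used for regression problems, and the exponential loss that underlies the AdaBoost classifier:

L(y, f(x)) = (y - f(x))^2        (squared-error loss, regression)
L(y, f(x)) = exp(-y f(x))        (exponential loss, y in {-1, +1}, classification)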
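As an early taste of what is to come, the following minimal sketch fits a small boosted model on the agaricus mushroom data that ships with the xgboost package, using the classic xgboost() interface; the parameter values are illustrative ones taken from the package's own examples, not the settings used later in the chapter:

# Load the xgboost package and its bundled mushroom data
library(xgboost)
data(agaricus.train, package = "xgboost")
data(agaricus.test, package = "xgboost")

# Fit a small boosted tree ensemble with the logistic loss
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
               max_depth = 2, eta = 1, nrounds = 2,
               objective = "binary:logistic", verbose = 0)

# Predicted probabilities for the test observations
pred <- predict(bst, agaricus.test$data)
head(pred)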