In this chapter, we presented an ensemble learning method called stacking (or stacked generalization). It can be seen as a more advanced form of voting. We first presented the basic concept of stacking, how to properly create the metadata, and how to decide on the ensemble's composition. We presented one regression and one classification implementation of stacking. Finally, we presented an ensemble class, implemented in the style of scikit-learn classes, which makes it easier to use multi-level stacking ensembles. The following are some key points to remember from this chapter.
Stacking can consist of many levels. Each level generates metadata for the next. You should create each level's metadata by splitting the train set into K folds and iteratively training on K-1 folds while creating metadata for the Kth fold, as sketched below. After creating the metadata...
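To make the K-fold metadata procedure concrete, here is a minimal sketch using scikit-learn. The dataset and base learners are illustrative choices, not the chapter's exact implementation; any regressors could stand in for them.

```python
# A minimal sketch of creating first-level metadata with K-fold
# cross-validation for a stacking regression ensemble.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import Ridge

X, y = load_diabetes(return_X_y=True)  # illustrative dataset
base_learners = [KNeighborsRegressor(), DecisionTreeRegressor(max_depth=4)]

# One metadata column per base learner; each row holds an
# out-of-fold prediction for the corresponding training instance.
meta_data = np.zeros((len(X), len(base_learners)))

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in kf.split(X):
    for i, learner in enumerate(base_learners):
        # Train on K-1 folds, then generate metadata for the Kth fold.
        learner.fit(X[train_idx], y[train_idx])
        meta_data[test_idx, i] = learner.predict(X[test_idx])

# The next level (here, a single meta-learner) is trained on the
# out-of-fold metadata rather than on the original features.
meta_learner = Ridge()
meta_learner.fit(meta_data, y)
```

In this sketch, every training instance receives predictions only from base learners that never saw it during fitting, which is what keeps the metadata free of the base learners' training-set optimism.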