The two most common meta-learning methods are bagging and boosting. Stacking is less widely used, yet it is powerful because one can combine models of different types. All three methods create a stronger estimator from a set of not-so-strong estimators. We tried the stacking procedure in Chapter 9, Tree Algorithms and Ensembles. Here, we try it with a neural network mixed with other models.
The process for stacking is as follows (a short code sketch follows the list):
- Split the dataset into training and testing sets.
- Split the training set into two parts.
- Train base learners on the first part of the training set.
- Make predictions using the base learners on the second part of the training set. Store these prediction vectors.
- Take the stored prediction vectors as inputs and the target variable as output. Train a higher-level learner (note that we are still on the second part of the training set).
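A minimal sketch of these steps might look like the following. The dataset here is synthetic, and the choice of base learners (an `MLPClassifier` and a `RandomForestClassifier`) and of the higher-level learner (a `LogisticRegression`) is an illustrative assumption, not the exact model mix used in this recipe:

```python
# Illustrative stacking sketch; models and hyperparameters are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Step 1: split the dataset into training and testing sets.
X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 2: split the training set into two parts.
X_base, X_meta, y_base, y_meta = train_test_split(
    X_train, y_train, test_size=0.5, random_state=0)

# Step 3: train the base learners on the first part.
base_learners = [
    MLPClassifier(max_iter=1000, random_state=0),
    RandomForestClassifier(random_state=0),
]
for learner in base_learners:
    learner.fit(X_base, y_base)

# Step 4: predict on the second part and store the prediction vectors,
# stacked column-wise to form the meta-features.
meta_features = np.column_stack(
    [learner.predict_proba(X_meta)[:, 1] for learner in base_learners])

# Step 5: train a higher-level learner on the stored predictions,
# still using only the second part of the training set.
meta_learner = LogisticRegression()
meta_learner.fit(meta_features, y_meta)

# At test time, pass the test set through the base learners first,
# then feed their predictions to the higher-level learner.
test_meta_features = np.column_stack(
    [learner.predict_proba(X_test)[:, 1] for learner in base_learners])
print(meta_learner.score(test_meta_features, y_test))
```

Using `predict_proba` rather than hard class labels keeps more information in the prediction vectors. Note that scikit-learn's `StackingClassifier` automates this pattern, using cross-validated predictions instead of a single holdout split.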