Stacked generalization is an ensemble technique that combines a diverse group of base models by introducing a meta-learner. A meta-learner is a second-level machine learning algorithm that learns the optimal way to combine the base learners' predictions:
"Stacked generalization is a means of non-linearly combining generalizers to make a new generalizer, to try to optimally integrate what each of the original generalizers has to say about the learning set. The more each generalizer has to say (which isn't duplicated in what the other generalizers have to say), the better the resultant stacked generalization."
- Wolpert (1992), Stacked Generalization
The steps for stacking are as follows:
- Split your dataset into a training set and a testing set.
- Train several base learners on the training set.
- Apply the base learners to the testing set to make predictions.
- Train the meta-learner on those predictions, using the testing set's true labels as the target, so that it learns how best to combine the base learners (see the sketch after this list).
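A minimal sketch of these steps in Python with scikit-learn follows. The dataset, the particular base learners, and the logistic-regression meta-learner are illustrative choices under assumed defaults, not prescriptions from the recipe above.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# Step 1: split the dataset into a training set and a testing set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=42)

# Step 2: train several diverse base learners on the training set.
base_learners = [
    DecisionTreeClassifier(random_state=42),
    KNeighborsClassifier(n_neighbors=5),
    SVC(probability=True, random_state=42),
]
for learner in base_learners:
    learner.fit(X_train, y_train)

# Step 3: apply the base learners to the testing set; their predicted
# probabilities become the meta-learner's input features.
meta_features = np.column_stack(
    [learner.predict_proba(X_test)[:, 1] for learner in base_learners])

# Step 4: train the meta-learner to combine the base learners' predictions.
meta_learner = LogisticRegression()
meta_learner.fit(meta_features, y_test)

def stacked_predict(X_new):
    """Predict with the full stack: base learners first, then meta-learner."""
    features = np.column_stack(
        [learner.predict_proba(X_new)[:, 1] for learner in base_learners])
    return meta_learner.predict(features)
```

Note that because the meta-learner here is fit on the testing set's labels, evaluating the stacked model honestly requires a third hold-out set, or, as in Wolpert's original formulation, cross-validated predictions generated on the training set. scikit-learn's `sklearn.ensemble.StackingClassifier` automates the cross-validation variant.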