"This is how you win ML competitions: you take other people's work and ensemble them together."
- Vitaly Kuznetsov, NIPS 2014
You may have already realized that we've been discussing ensemble learning. It is defined on www.scholarpedia.org as "the process by which multiple models, such as classifiers or experts, are strategically generated and combined to solve a particular computational intelligence problem". In random forests and gradient boosting, we combined the votes of hundreds or thousands of trees to make a prediction; by that definition, those models are ensembles. The same methodology can be extended to any learner to create ensembles, which some refer to as meta-ensembles or meta-learners. We'll look at one of these methods, referred to as stacking. In this methodology, we'll produce a number of classifiers...
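To make the idea concrete before the walkthrough, here is a minimal sketch of stacking in Python using scikit-learn's `StackingClassifier`. This is an illustrative assumption on my part, not the chapter's own code: base (level-0) learners are trained on the data, and a meta-learner (level-1) is fit on their cross-validated predictions. The dataset, model choices, and parameters are all placeholders.

```python
# Stacking sketch (assumed example, not the chapter's code):
# diverse base learners feed a meta-learner that combines them.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic binary classification data stands in for a real problem
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=42)

# Level-0 (base) learners: deliberately different model families,
# since stacking benefits most from diverse, decorrelated errors
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
]

# Level-1 meta-learner: fit on the base learners' out-of-fold
# predictions (cv=5) to avoid leaking training labels
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X_train, y_train)
accuracy = stack.score(X_test, y_test)
```

The key design point is the cross-validation inside the stacker: each base learner's predictions for a training row come from a model that never saw that row, which keeps the meta-learner from simply memorizing overfit base-model outputs.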