Ensemble learning as model selection
This approach is not a proper ensemble learning technique, but it is sometimes known as bucketing. In the previous section, we discussed how a few strong learners with different peculiarities can be combined to make up a committee.
However, in many cases, a single learner is enough to achieve a good bias-variance trade-off, yet it's not easy to choose among the whole population of machine learning algorithms. For this reason, when a family of similar problems must be solved (they can differ, but it's better to consider scenarios that can be easily compared), it's possible to create an ensemble containing several models and use cross-validation to find the one whose performance is the best. At the end of the process, a single learner will be used, but its choice can be regarded as a grid search with a voting system.
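The procedure can be sketched as follows. This is a minimal illustration using scikit-learn; the dataset, the three candidate models, and the 10-fold setting are arbitrary assumptions made only for the example:

```python
# A minimal sketch of bucketing: cross-validate several candidate models
# on the same dataset and keep the one with the best mean score.
# The dataset and candidate models are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=42)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "decision_tree": DecisionTreeClassifier(random_state=42),
}

# Score each candidate with 10-fold cross-validation
mean_scores = {
    name: cross_val_score(model, X, y, cv=10).mean()
    for name, model in candidates.items()
}

# The "vote" is simply the highest mean CV accuracy:
# only this single learner is retained for the final task
best_name = max(mean_scores, key=mean_scores.get)
print(best_name, mean_scores[best_name])
```

Note that, exactly as described above, only one model survives the selection; the ensemble exists solely during the validation phase.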
Sometimes, this technique can unveil important differences even when using similar datasets. For example...