In this section, we will write a stacking aggregator with scikit-learn. A stacking aggregator combines models of potentially very different types, whereas many of the ensemble algorithms we have seen so far combine models of the same type, usually decision trees.
The fundamental idea behind the stacking aggregator is to use the predictions of several machine learning algorithms as the inputs for training another machine learning algorithm.
In more detail, we train two or more machine learning algorithms on a first pair of feature and target sets, (X_1, y_1). Each trained model then makes predictions on a second feature set, X_stack, producing y_pred_1, y_pred_2, and so on.
These predictions, y_pred_1 and y_pred_2, become the inputs to a second-level machine learning algorithm, which is trained against the target set y_stack. Finally, the error can be measured on a third feature set, X_3, and its ground truth set, y_3.
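As a rough illustration of this process, here is a minimal sketch with scikit-learn. It assumes a synthetic dataset and two arbitrary base models (a decision tree and a support vector classifier) feeding a logistic regression as the second-level model; the variable names X_1, X_stack, X_3, and so on simply mirror the notation above and are not part of any scikit-learn API.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Toy data, split three ways: base training, stacking, and final evaluation
X, y = make_classification(n_samples=1500, random_state=0)
X_1, X_rest, y_1, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_stack, X_3, y_stack, y_3 = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Train two base models of different types on (X_1, y_1)
base_1 = DecisionTreeClassifier(random_state=0).fit(X_1, y_1)
base_2 = SVC(random_state=0).fit(X_1, y_1)

# Their predictions on X_stack become the features for the second-level model
y_pred_1 = base_1.predict(X_stack)
y_pred_2 = base_2.predict(X_stack)
stack_features = np.column_stack([y_pred_1, y_pred_2])

# Train the second-level model on the stacked predictions and y_stack
meta = LogisticRegression().fit(stack_features, y_stack)

# Measure the error on the third set, (X_3, y_3)
test_features = np.column_stack([base_1.predict(X_3), base_2.predict(X_3)])
print("stacked accuracy:", accuracy_score(y_3, meta.predict(test_features)))
```

Note that scikit-learn also ships a ready-made StackingClassifier; the manual version here is only meant to make the data flow between the three sets explicit.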
It will be...