In an AdaBoost ensemble, the mistakes made in each iteration are used to adjust the weights of the training samples for the following iterations, so that later estimators focus on the samples that were predicted poorly. As with the bagging meta-estimator, this meta-estimator can use any base estimator in place of the decision trees it uses by default. Here, we have used it with its default estimator on the Automobile dataset:
from sklearn.ensemble import AdaBoostRegressor

# Boost 100 copies of the default base estimator (a shallow decision tree)
rgr = AdaBoostRegressor(n_estimators=100)
rgr.fit(x_train, y_train)
y_test_pred = rgr.predict(x_test)
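If we want a different base learner, we can pass one in explicitly. The following is a minimal sketch, assuming a recent scikit-learn version where the parameter is called estimator (older versions call it base_estimator), that boosts linear regression models instead of the default decision trees; the variable names here are our own:

from sklearn.ensemble import AdaBoostRegressor
from sklearn.linear_model import LinearRegression

# Boost linear models instead of the default decision trees
# (on scikit-learn versions before 1.2, use base_estimator= instead of estimator=)
rgr_linear = AdaBoostRegressor(estimator=LinearRegression(), n_estimators=100)
rgr_linear.fit(x_train, y_train)
y_test_pred_linear = rgr_linear.predict(x_test)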
The AdaBoost meta-estimator also has a staged_predict method, which returns the ensemble's predictions after each boosting iteration and lets us plot how the training or test error evolves as more estimators are added. Here is the code for plotting the test error:
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error

# Compute the test MSE after each boosting iteration and plot it
fig, ax = plt.subplots()
pd.DataFrame(
    [
        (n, mean_squared_error(y_test, y_pred_staged))
        for n, y_pred_staged in enumerate(rgr.staged_predict(x_test), 1)
    ],
    columns=['n', 'Test Error']
).set_index('n').plot(ax=ax)
fig.show()
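To see whether additional boosting rounds start to overfit, we can track the training error alongside the test error in the same way. The following sketch (the variable names are our own) calls staged_predict on both the training and the test set:

import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error

# Staged predictions give the ensemble's output after 1, 2, ..., N iterations
train_errors = [
    mean_squared_error(y_train, y_pred)
    for y_pred in rgr.staged_predict(x_train)
]
test_errors = [
    mean_squared_error(y_test, y_pred)
    for y_pred in rgr.staged_predict(x_test)
]

pd.DataFrame(
    {'Train Error': train_errors, 'Test Error': test_errors},
    index=range(1, len(train_errors) + 1)
).plot()
plt.show()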