Summary
In this chapter, you learned the difference between bagging and boosting. You learned how gradient boosting works by building a gradient boosting regressor from scratch. You implemented a variety of gradient boosting hyperparameters, including learning_rate, n_estimators, max_depth, and subsample, the last of which results in stochastic gradient boosting. Finally, you worked with a large dataset to predict whether stars have exoplanets, comparing the training times of GradientBoostingClassifier and XGBClassifier, with XGBClassifier emerging as two to over ten times faster, as well as more accurate.
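As a recap, here is a minimal sketch of the stochastic gradient boosting setup covered in this chapter; the dataset and split are hypothetical stand-ins, not the chapter's data:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data; the chapter used its own datasets
X, y = make_regression(n_samples=1000, n_features=10, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

# subsample < 1.0 trains each tree on a random fraction of rows,
# which turns gradient boosting into stochastic gradient boosting
gbr = GradientBoostingRegressor(
    n_estimators=300, learning_rate=0.1, max_depth=2, subsample=0.8,
    random_state=2
)
gbr.fit(X_train, y_train)
print(gbr.score(X_test, y_test))  # R^2 score on the test set
```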
The advantage of learning these skills is that you now understand when to apply XGBoost rather than similar machine learning algorithms such as gradient boosting. You can now build stronger XGBoost and gradient boosting models by properly taking advantage of core hyperparameters, including n_estimators and learning_rate. Furthermore, you have developed the capacity to time all computations instead of relying...
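A minimal sketch of the kind of timing comparison used in this chapter, assuming xgboost is installed and using synthetic stand-in data in place of the exoplanet dataset:

```python
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from xgboost import XGBClassifier

# Hypothetical stand-in for the exoplanet dataset
X, y = make_classification(n_samples=5000, n_features=20, random_state=2)

# Time each classifier's fit to compare speed directly
for model in (GradientBoostingClassifier(n_estimators=100),
              XGBClassifier(n_estimators=100)):
    start = time.time()
    model.fit(X, y)
    print(type(model).__name__, round(time.time() - start, 2), "seconds")
```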