Chapter 6: XGBoost Hyperparameters
XGBoost has many hyperparameters. Its base learner hyperparameters include all decision tree hyperparameters as a starting point. Because XGBoost is an enhanced version of gradient boosting, it also carries gradient boosting hyperparameters. In addition, hyperparameters unique to XGBoost are designed to improve accuracy and speed. However, trying to tackle all XGBoost hyperparameters at once can be dizzying.
In Chapter 2, Decision Trees in Depth, we reviewed and applied base learner hyperparameters such as max_depth, while in Chapter 4, From Gradient Boosting to XGBoost, we applied important XGBoost hyperparameters, including n_estimators and learning_rate. We will revisit these hyperparameters in this chapter in the context of XGBoost. We will also learn about novel XGBoost hyperparameters, such as gamma, and a technique called early stopping, previewed in the sketch below.
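To make these names concrete before we tune them, here is a minimal sketch that sets all of the hyperparameters just mentioned in one place, using XGBoost's scikit-learn API. It assumes xgboost >= 1.6, where early_stopping_rounds is a constructor argument, and uses synthetic data; the specific values are illustrative, not recommendations.

```python
# A minimal sketch of the hyperparameters discussed above
# (assumes xgboost >= 1.6; values are illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic data stands in for a real dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=2)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=2)

model = XGBClassifier(
    n_estimators=500,          # upper bound on boosting rounds (Chapter 4)
    learning_rate=0.1,         # shrinks each tree's contribution (Chapter 4)
    max_depth=3,               # base learner (decision tree) depth (Chapter 2)
    gamma=0.5,                 # min loss reduction to split; unique to XGBoost
    early_stopping_rounds=10,  # stop when the validation score stalls
    eval_metric='logloss',
)

# Early stopping monitors the validation set supplied via eval_set.
model.fit(X_train, y_train, eval_set=[(X_val, y_val)], verbose=False)
print('Best iteration:', model.best_iteration)
```

With early stopping, the model may train fewer than n_estimators rounds; best_iteration reports where the validation score stopped improving. Each of these hyperparameters is examined in depth later in this chapter.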
In this chapter, to gain proficiency in fine-tuning XGBoost hyperparameters, we will cover the...