Hands-On Gradient Boosting with XGBoost and scikit-learn

Product type: Book
Published: October 2020
Publisher: Packt
ISBN-13: 9781839218354
Pages: 310
Edition: 1st
Author: Corey Wade

Table of Contents (15)

Preface
Section 1: Bagging and Boosting
Chapter 1: Machine Learning Landscape
Chapter 2: Decision Trees in Depth
Chapter 3: Bagging with Random Forests
Chapter 4: From Gradient Boosting to XGBoost
Section 2: XGBoost
Chapter 5: XGBoost Unveiled
Chapter 6: XGBoost Hyperparameters
Chapter 7: Discovering Exoplanets with XGBoost
Section 3: Advanced XGBoost
Chapter 8: XGBoost Alternative Base Learners
Chapter 9: XGBoost Kaggle Masters
Chapter 10: XGBoost Model Deployment
Other Books You May Enjoy

Modifying gradient boosting hyperparameters

In this section, we will focus on learning_rate, the most important gradient boosting hyperparameter, with the possible exception of n_estimators, the number of boosting iterations, or trees, in the model. We will also survey the tree hyperparameters and subsample, which results in stochastic gradient boosting. In addition, we will use RandomizedSearchCV and compare the results with XGBoost.
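
As a preview of that workflow, here is a minimal sketch of a randomized search over these hyperparameters. The dataset and parameter ranges below are illustrative placeholders, not the exact values used later in the chapter:

from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV

# Placeholder dataset; the chapter uses its own data.
X, y = fetch_california_housing(return_X_y=True)

# Illustrative search space over the hyperparameters named above.
params = {
    'n_estimators': [100, 300, 500],
    'learning_rate': [0.01, 0.05, 0.1, 0.3],
    'max_depth': [2, 3, 4],             # a tree hyperparameter
    'subsample': [0.7, 0.85, 1.0],      # < 1.0 gives stochastic gradient boosting
}

rand_search = RandomizedSearchCV(
    GradientBoostingRegressor(random_state=2),
    param_distributions=params,
    n_iter=10,
    scoring='neg_root_mean_squared_error',
    cv=5,
    random_state=2,
)
rand_search.fit(X, y)
print(rand_search.best_params_)
print('best RMSE:', -rand_search.best_score_)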

learning_rate

In the last section, changing the learning_rate value of GradientBoostingRegressor from 1.0 to 0.1, scikit-learn's default, resulted in enormous gains.
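
A comparison along those lines can be reproduced with a sketch like the following; the dataset and train/test split here are placeholders, not the setup from the last section:

from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Placeholder data and split, for illustration only.
X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

for lr in (1.0, 0.1):  # 0.1 is scikit-learn's default
    gbr = GradientBoostingRegressor(learning_rate=lr, random_state=2)
    gbr.fit(X_train, y_train)
    rmse = mean_squared_error(y_test, gbr.predict(X_test)) ** 0.5
    print(f'learning_rate={lr}: test RMSE = {rmse:.3f}')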

learning_rate, also known as shrinkage, scales the contribution of each individual tree so that no single tree exerts too much influence when building the model. Since the entire ensemble is built from the errors of the preceding base learners, early trees in the model can have too much influence on subsequent development without careful adjustment of hyperparameters. learning_rate limits the influence...
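
To make the shrinkage mechanics concrete, here is a hand-rolled sketch of the boosting update on synthetic data. It is a simplification of what GradientBoostingRegressor does internally with squared-error residuals, not the library's actual implementation:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Synthetic data, purely for illustration.
rng = np.random.RandomState(2)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=200)

learning_rate = 0.1
prediction = np.full_like(y, y.mean())      # start from the mean
for _ in range(100):
    residuals = y - prediction              # errors of the ensemble so far
    tree = DecisionTreeRegressor(max_depth=2, random_state=2)
    tree.fit(X, residuals)                  # the next tree fits those errors
    # Shrinkage: only a fraction of each tree's correction is added,
    # so no single tree has too much influence.
    prediction += learning_rate * tree.predict(X)

print('training MSE:', np.mean((y - prediction) ** 2))

With learning_rate set to 1.0, each tree's full correction is applied and the early trees dominate; shrinking the contribution to 0.1 forces the ensemble to improve gradually across many trees.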
