Practical Data Science with Python

Learn tools and techniques from hands-on examples to extract insights from data

By Nathan George. Published by Packt, September 2021. 1st Edition, paperback, 620 pages. ISBN-13: 9781801071970.

Table of Contents

Preface
Part I - An Introduction and the Basics
    Introduction to Data Science
    Getting Started with Python
Part II - Dealing with Data
    SQL and Built-in File Handling Modules in Python
    Loading and Wrangling Data with Pandas and NumPy
    Exploratory Data Analysis and Visualization
    Data Wrangling Documents and Spreadsheets
    Web Scraping
Part III - Statistics for Data Science
    Probability, Distributions, and Sampling
    Statistical Testing for Data Science
Part IV - Machine Learning
    Preparing Data for Machine Learning: Feature Selection, Feature Engineering, and Dimensionality Reduction
    Machine Learning for Classification
    Evaluating Machine Learning Classification Models and Sampling for Classification
    Machine Learning with Regression
    Optimizing Models and Using AutoML
    Tree-Based Machine Learning Models
    Support Vector Machine (SVM) Machine Learning Models
Part V - Text Analysis and Reporting
    Clustering with Machine Learning
    Working with Text
Part VI - Wrapping Up
    Data Storytelling and Automated Reporting/Dashboarding
    Ethics and Privacy
    Staying Up to Date and the Future of Data Science
Other Books You May Enjoy
Index

Feature importance from tree-based methods

Feature importance, also called variable importance, can be calculated from tree-based methods by summing the reduction in Gini impurity or entropy from every split on each variable across all the trees.

So, if a particular variable is used to split the data and reduces the Gini impurity or entropy by a large amount, that feature is important for making predictions. This is a nice contrast to coefficient-based feature importance from logistic or linear regression, because tree-based feature importances can capture non-linear relationships between features and the target. There are other ways of calculating feature importance as well, such as permutation feature importance and SHAP (SHapley Additive exPlanations).
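
As an illustrative sketch (not the book's own code), the same impurity-based importances can be read off a scikit-learn random forest; the synthetic dataset and feature names below are made up for the example:

import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical example data: 1,000 rows, 5 features, 2 of them informative.
X, y = make_classification(n_samples=1000, n_features=5, n_informative=2, random_state=42)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# Fit a random forest; impurity-based importances are accumulated during training.
rf = RandomForestClassifier(n_estimators=200, random_state=42)
rf.fit(X, y)

# feature_importances_ is the normalized total Gini impurity reduction from
# every split on each feature, averaged over all trees in the forest.
importances = pd.Series(rf.feature_importances_, index=feature_names)
print(importances.sort_values(ascending=False))

For the same fitted model, permutation importance (sklearn.inspection.permutation_importance) or SHAP values can be used as a cross-check, since impurity-based importances can be biased toward high-cardinality features.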

Using H2O for feature importance

We can get the importances from the trained model with drf.varimp(), or plot them with drf.varimp_plot(server=True). The server=True argument makes H2O render the plot with matplotlib, which allows us to do things such as saving the figure directly with plt.savefig(). The result looks like this:

...
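
For context, here is a minimal sketch of how the surrounding H2O workflow might look; the input file, column names, and training settings are assumptions for illustration, not the book's exact code (only the model name drf and the varimp calls come from the text above):

import h2o
from h2o.estimators import H2ORandomForestEstimator
import matplotlib.pyplot as plt

h2o.init()

# Hypothetical data: assume a CSV with a binary target column named "label".
frame = h2o.import_file("train.csv")
frame["label"] = frame["label"].asfactor()
predictors = [c for c in frame.columns if c != "label"]

# Train a distributed random forest (DRF) model.
drf = H2ORandomForestEstimator(ntrees=100, seed=42)
drf.train(x=predictors, y="label", training_frame=frame)

# varimp() returns relative, scaled, and percentage importances per variable;
# varimp_plot(server=True) draws the plot with matplotlib so it can be saved.
print(drf.varimp(use_pandas=True))
drf.varimp_plot(server=True)
plt.savefig("drf_varimp.png")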