The Data Science Workshop: A New, Interactive Approach to Learning Data Science

Product type: Paperback
Published: Jan 2020
Publisher: Packt
ISBN-13: 9781838981266
Length: 818 pages
Edition: 1st Edition
Authors (5):
- Thomas Joseph
- Andrew Worsley
- Robert Thas John
- Anthony So
- Dr. Samuel Asare
Table of Contents (18 Chapters)

Preface
1. Introduction to Data Science in Python
2. Regression (FREE CHAPTER)
3. Binary Classification
4. Multiclass Classification with RandomForest
5. Performing Your First Cluster Analysis
6. How to Assess Performance
7. The Generalization of Machine Learning Models
8. Hyperparameter Tuning
9. Interpreting a Machine Learning Model
10. Analyzing a Dataset
11. Data Preparation
12. Feature Engineering
13. Imbalanced Datasets
14. Dimensionality Reduction
15. Ensemble Learning
16. Machine Learning Pipelines
17. Automated Feature Engineering

Summary

In this chapter, we learned several techniques for interpreting Machine Learning models. Some techniques are specific to the model used: coefficients for linear models and variable importance for tree-based models. Others are model-agnostic, such as variable importance via permutation.
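Both flavors of global interpreter can be sketched in a few lines with scikit-learn. The synthetic dataset, model, and parameter values below are illustrative assumptions, not the chapter's own data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real dataset (assumption for illustration)
X, y = make_classification(n_samples=500, n_features=8,
                           n_informative=3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Model-specific global interpreter: impurity-based variable importance,
# the tree-based analogue of coefficients for linear models
print(model.feature_importances_)

# Model-agnostic global interpreter: variable importance via permutation,
# measured as the drop in test-set score when each feature is shuffled
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=42)
print(result.importances_mean)
```

Note that permutation importance is computed on held-out data, so it reflects what the model actually relies on at prediction time rather than what it used during training.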

All of these are global interpreters: they look at the entire dataset and analyze each variable's overall contribution to the predictions. We can use this information not only to identify which variables have the most impact on predictions but also to shortlist features. Rather than keeping every feature in a dataset, we can retain only those with the strongest influence, which can significantly reduce the computation time for training a model or generating predictions.
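That shortlisting step can be sketched as ranking features by importance and keeping the top few. The cutoff of three features and the synthetic data below are arbitrary choices for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Rank features by importance (descending) and keep only the strongest;
# top_k = 3 is an example cutoff, not a recommended value
top_k = 3
top_idx = np.argsort(model.feature_importances_)[::-1][:top_k]
X_reduced = X[:, top_idx]
print(X_reduced.shape)  # (500, 3)
```

In practice the cutoff is usually chosen by cross-validating the model on the reduced feature set rather than fixing it in advance.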

We also went through a local interpreter scenario with LIME, which analyzes a single observation at a time. It helped us better understand the decisions made by...
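The core idea behind LIME can be illustrated without the lime library itself: perturb one observation, weight the perturbed samples by their proximity to it, and fit a weighted linear surrogate whose coefficients explain the local decision. This is a simplified sketch of the technique, not the lime package's API, and every value below is an illustrative assumption:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# The single observation we want to explain
instance = X[0]

# Sample the neighborhood of the instance with Gaussian perturbations
neighborhood = instance + rng.normal(scale=0.5, size=(1000, X.shape[1]))
preds = model.predict_proba(neighborhood)[:, 1]

# Weight perturbed samples by proximity to the instance (RBF kernel)
dists = np.linalg.norm(neighborhood - instance, axis=1)
weights = np.exp(-(dists ** 2) / 2)

# Fit a weighted linear surrogate; its coefficients approximate the
# black-box model's behavior locally, around this one observation
surrogate = Ridge(alpha=1.0).fit(neighborhood, preds, sample_weight=weights)
print(surrogate.coef_)
```

The surrogate's coefficients are only meaningful near the chosen instance; a different observation generally yields a different local explanation, which is exactly what distinguishes LIME from the global interpreters above.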
