Hands-On Explainable AI (XAI) with Python

You're reading from Hands-On Explainable AI (XAI) with Python: Interpret, visualize, explain, and integrate reliable AI for fair, secure, and trustworthy AI apps

Product type: Paperback
Published: Jul 2020
Publisher: Packt
ISBN-13: 9781800208131
Length: 454 pages
Edition: 1st
Author: Denis Rothman
Table of Contents

Preface
1. Explaining Artificial Intelligence with Python
2. White Box XAI for AI Bias and Ethics
3. Explaining Machine Learning with Facets
4. Microsoft Azure Machine Learning Model Interpretability with SHAP
5. Building an Explainable AI Solution from Scratch
6. AI Fairness with Google's What-If Tool (WIT)
7. A Python Client for Explainable AI Chatbots
8. Local Interpretable Model-Agnostic Explanations (LIME)
9. The Counterfactual Explanations Method
10. Contrastive XAI
11. Anchors XAI
12. Cognitive XAI
13. Answers to the Questions
14. Other Books You May Enjoy
15. Index

Summary

In this chapter, we explored how to explain the output of a machine learning algorithm with a model-agnostic approach using SHapley Additive exPlanations (SHAP). SHAP provides an excellent way to explain models by analyzing just their input data and output predictions.

We saw that SHAP relies on the Shapley value to explain the marginal contribution of a feature to a prediction. We started by understanding the mathematical foundations of the Shapley value. We then applied the Shapley value equation to a sentiment analysis example. With that in mind, we got started with SHAP.
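
For reference, the Shapley value of a feature is its marginal contribution averaged over every possible coalition of the other features. The standard formula (general game-theoretic notation, not necessarily the notation used in the chapter) is:

```latex
% Shapley value of player (feature) i in a coalitional game v over player set N:
% the average of i's marginal contribution v(S ∪ {i}) − v(S),
% taken over all subsets S of players that do not contain i.
\varphi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|! \, (|N| - |S| - 1)!}{|N|!}
  \Bigl( v(S \cup \{i\}) - v(S) \Bigr)
```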

We installed SHAP, imported the modules, imported the dataset, and split it into a training dataset and a testing dataset. Once that was done, we vectorized the data to run a linear model. We created the SHAP linear model explainer and visualized the marginal contribution of each feature to the sentiment predictions for the reviews. A positive review prediction value...
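The workflow the summary describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the book's exact code: it assumes the IMDB review corpus bundled with the shap package and a scikit-learn logistic regression; names such as corpus and vectorizer are illustrative.

```python
# Minimal sketch of the workflow described above (not the book's exact code):
# load a sentiment dataset, vectorize it, fit a linear model, explain with SHAP.
import sklearn.linear_model
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
import shap

# IMDB movie review corpus shipped with the shap package
# (assumption: any labeled review dataset would work here).
corpus, y = shap.datasets.imdb()
corpus_train, corpus_test, y_train, y_test = train_test_split(
    corpus, y, test_size=0.2, random_state=7
)

# Vectorize the reviews so a linear model can consume them.
vectorizer = TfidfVectorizer(min_df=10)
X_train = vectorizer.fit_transform(corpus_train)
X_test = vectorizer.transform(corpus_test)

# Fit a simple linear classifier for positive/negative sentiment.
model = sklearn.linear_model.LogisticRegression(C=0.1)
model.fit(X_train, y_train)

# Create the SHAP linear explainer and compute per-feature contributions.
explainer = shap.LinearExplainer(model, X_train)
shap_values = explainer.shap_values(X_test)

# Visualize how each word (feature) pushes predictions
# toward a positive or negative review.
shap.summary_plot(shap_values, X_test,
                  feature_names=vectorizer.get_feature_names_out())
```

In the resulting summary plot, each point is one review, and words with large positive SHAP values push that review's prediction toward the positive class.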
