Key Concepts of Interpretability
This book covers many model interpretation methods. Some produce metrics, others create visuals, and some do both; some describe models broadly, while others do so at a granular level. In this chapter, we will learn about two such methods, feature importance and decision regions, as well as the taxonomies used to describe them; a brief sketch of this contrast follows below. We will also examine the factors that hinder machine learning interpretability as a primer to what lies ahead.
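As a first taste of that contrast, here is a minimal sketch, assuming scikit-learn and the Iris dataset (illustrative choices, not necessarily the chapter's own examples). Permutation importance, one common way to compute feature importance, yields a metric per feature, while a decision-region plot yields a visual of how the model partitions the feature space.

```python
# A minimal sketch (illustrative assumptions): a metric-producing method
# (permutation feature importance) versus a visual one (decision regions).
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.inspection import DecisionBoundaryDisplay, permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X2 = X[:, :2]  # keep two features so the decision regions can be drawn in 2D
model = LogisticRegression(max_iter=1000).fit(X2, y)

# Metric: how much does shuffling each feature degrade accuracy?
result = permutation_importance(model, X2, y, n_repeats=10, random_state=0)
for name, score in zip(["sepal length", "sepal width"], result.importances_mean):
    print(f"{name}: {score:.3f}")

# Visual: the regions of feature space assigned to each class.
disp = DecisionBoundaryDisplay.from_estimator(model, X2, alpha=0.4)
disp.ax_.scatter(X2[:, 0], X2[:, 1], c=y, edgecolor="k")
plt.show()
```

The printed scores and the plotted regions answer different questions about the same model, which is exactly why we need taxonomies to organize these methods.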
The following are the main topics we are going to cover in this chapter:
- Learning about interpretation method types and scopes
- Appreciating what hinders machine learning interpretability
Let’s start with our technical requirements.