Chapter 2: Key Concepts of Interpretability
This book covers many model interpretation methods: some produce metrics, others produce visuals, and some produce both; some depict your model broadly and others granularly. In this chapter, we will learn about two of these methods, feature importance and decision regions, as well as the taxonomies used to describe them. We will also detail what hinders machine learning interpretability, as a primer for what lies ahead.
The following are the main topics we are going to cover in this chapter:
- Learning about interpretation method types and scopes
- Appreciating what hinders machine learning interpretability
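To make the distinction between metric-producing and visual methods concrete before we dive in, here is a minimal sketch. It is not taken from this chapter's examples; it assumes scikit-learn, matplotlib, and the iris dataset (restricted to two features so the regions can be plotted), and contrasts a metric-based method (permutation feature importance) with a visual one (decision regions).

```python
# A minimal sketch (assumed example, not the book's own): permutation feature
# importance yields a number per feature, while decision regions yield a plot.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Keep only two features so the decision regions are two-dimensional.
data = load_iris()
X, y = data.data[:, :2], data.target

model = RandomForestClassifier(random_state=0).fit(X, y)

# Metric: permutation feature importance (one score per feature).
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(data.feature_names[:2], result.importances_mean):
    print(f"{name}: {score:.3f}")

# Visual: decision regions over a grid covering the feature space.
xx, yy = np.meshgrid(
    np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 200),
    np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 200),
)
Z = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolor="k")
plt.xlabel(data.feature_names[0])
plt.ylabel(data.feature_names[1])
plt.title("Decision regions")
plt.show()
```

The importance scores summarize the whole model in a few numbers, while the decision-region plot shows, point by point, how the model partitions the input space; this is the kind of trade-off between breadth and granularity the chapter explores.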