Understanding the current landscape of ML interpretability
First, we will provide some context on how the book relates to the main goals of ML interpretability and how practitioners can start applying its methods to achieve those broad goals. Then, we'll discuss the current areas of growth in research.
Tying everything together!
As discussed in Chapter 1, Interpretation, Interpretability, and Explainability; and Why Does It All Matter?, there are three main themes when talking about ML interpretability: Fairness, Accountability, and Transparency (FAT), and each of these presents a series of concerns (see Figure 14.1). I think we can all agree that these are desirable properties for any model! Indeed, these concerns all present opportunities for the improvement of Artificial Intelligence (AI) systems. These improvements start by leveraging model interpretation methods to evaluate models, confirm or dispute assumptions, and find problems.
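As a minimal, hypothetical sketch of what that first step can look like in practice, the snippet below uses scikit-learn's permutation importance on a held-out set to check which features a trained model actually relies on; the dataset and model are placeholders chosen only for illustration, not one of this book's case studies, and any interpretation method covered in earlier chapters could play the same role.

```python
# A minimal sketch: use permutation importance on held-out data to evaluate
# a trained model, confirm or dispute assumptions about which features drive
# its predictions, and surface potential problems (e.g., reliance on a leaky
# or sensitive feature). Dataset and model are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Features whose shuffling causes a large drop in test score are the ones
# the model truly depends on -- a starting point for questioning whether
# that dependence is justified.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} {result.importances_mean[idx]:.3f}")
```

If the most important features contradict domain knowledge, that is a cue to dig deeper with the fairness, accountability, and transparency lenses discussed above.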
Your aim will depend...