Summary
Explainability will be indispensable as ML becomes mainstream. Interpretable ML helps practitioners validate models, catches correct predictions made for the wrong reasons, and increases user trust, leading to broader adoption. For example, we do not want a credit model to deny an unqualified applicant based on gender rather than on their poor payment history.
Interpretable ML also uncovers new insights by helping humans comprehend a model's decision-making process. Today, users demand information about causal links, not just probabilities derived from statistical relationships. Interpretable ML techniques still face challenges, however, including a lack of consensus on design standards and on methods for benchmarking them. In the next chapter, we will explore backpropagation-based and perturbation-based XAI techniques.