Summary
After reading this chapter, you should know how to use SHAP's KernelExplainer, as well as its decision and force plots, to conduct local interpretations. You should also know how to do the same with LIME's instance explainers for both tabular and text data. Lastly, you should understand the strengths and weaknesses of SHAP's KernelExplainer and LIME.
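As a quick reference, the sketch below condenses the workflow covered in this chapter. It is only a recap under assumptions: model, X_train, and X_test are placeholder names for the fitted classifier and data splits used in the chapter's examples, the class labels are hypothetical, and the per-class indexing of the SHAP output may vary with your shap version.

```python
import shap
from lime.lime_tabular import LimeTabularExplainer

# SHAP's model-agnostic KernelExplainer, using a small background sample
# to keep the number of coalition evaluations manageable
explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X_train, 100))
obs = X_test.iloc[0, :]                   # single observation to explain
shap_values = explainer.shap_values(obs)  # one array per class in older shap versions

# Local interpretation plots for that observation (positive class = index 1)
shap.force_plot(explainer.expected_value[1], shap_values[1], obs, matplotlib=True)
shap.decision_plot(explainer.expected_value[1], shap_values[1], obs)

# LIME's tabular instance explainer for the same observation
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X_train.columns),
    class_names=['negative', 'positive'],  # placeholder class labels
    mode='classification'
)
lime_exp = lime_explainer.explain_instance(obs.values, model.predict_proba,
                                           num_features=8)
print(lime_exp.as_list())  # (feature, weight) pairs from the local surrogate
```

For text data, the same explain_instance pattern applies with LIME's LimeTextExplainer and a pipeline that maps raw strings to class probabilities.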
In the next chapter, we will learn how to create even more human-interpretable explanations of a model's decisions, such as "if X conditions are met, then Y is the outcome".