Explaining a model's outcome with SHAP
Not long after LIME came out, another tool was introduced to help interpret AI models: SHapley Additive exPlanations (SHAP). The main idea behind SHAP is that by evaluating the model over many permutations (coalitions) of the input features, you can determine how much each feature contributes to a given outcome. This might sound similar to LIME, and that's because SHAP drew inspiration from it while introducing some new concepts of its own. There are, however, key differences, which we'll explain.
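To make this concrete, here is a minimal sketch of explaining a single prediction with the shap Python library; the dataset, model, and parameters are illustrative assumptions, not part of a specific example from this book.

```python
# Minimal sketch: per-feature SHAP values for one prediction
# (assumes `pip install shap scikit-learn`; choices below are illustrative).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])  # explain the first row only

# Each value is one feature's additive contribution to this prediction,
# relative to the expected (average) model output.
for name, value in zip(X.columns, shap_values[0]):
    print(f"{name}: {value:+.3f}")
print("base value:", explainer.expected_value)
```

The per-feature contributions sum (together with the base value) to the model's prediction for that row, which is the "additive" part of the name.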
Avoid confusion with Shapley values
SHAP is based on Shapley values, but the two are not the same thing. SHAP builds on the game-theoretic idea behind Shapley values and extends it in practical ways. It comes in several variants, such as KernelSHAP and TreeSHAP, and it also supports global interpretations that go beyond what classical Shapley values provide.
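As a hedged sketch of those two ideas, the snippet below uses the model-agnostic KernelExplainer for a simple model and then aggregates SHAP values across many rows into a global summary; the model, sample sizes, and data are illustrative assumptions.

```python
# Sketch: KernelSHAP (model-agnostic) plus a global summary view.
# Assumes `pip install shap scikit-learn matplotlib`; parameters are illustrative.
import shap
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = Ridge().fit(X, y)

# KernelSHAP works with any predict function; a small background sample
# stands in for "absent" features when feature coalitions are evaluated.
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X.iloc[:100])

# Aggregating per-feature SHAP values over many rows gives a global picture
# of feature importance, not just a single-prediction explanation.
shap.summary_plot(shap_values, X.iloc[:100])
```

KernelSHAP trades speed for generality, while TreeSHAP exploits tree structure to compute the same kind of values much faster for tree-based models.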
Let's look at an example to make it clearer how SHAP is used.
Let's say you are a player in a two-on-two basketball...