Computing global and local attributions with SHAP's KernelExplainer
Permutation methods alter a model's inputs to assess how much those changes affect its output. We first discussed this in Chapter 4, Fundamentals of Feature Importance and Impact, but, if you recall, there's a coalitional framework for performing these permutations that produces the average marginal contribution of each feature across different coalitions of features. The outcome of this process is Shapley values, which have essential mathematical properties such as additivity and symmetry. Unfortunately, Shapley values are costly to compute for datasets that aren't small, so the SHAP library provides approximation methods. One of these is the KernelExplainer, which we used in Chapter 5, Global Model-Agnostic Interpretation Methods. It approximates Shapley values with a weighted local linear regression, just as LIME does.
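As a quick illustration of that approximation in practice, here is a minimal sketch of how KernelExplainer can be invoked on a tabular classifier. The dataset, model, background sample size, and nsamples value are assumptions chosen for demonstration, not the chapter's actual example.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fit a simple model on a small tabular dataset (illustrative choice)
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# KernelExplainer takes a prediction function and a background dataset;
# a small background sample keeps the coalition sampling tractable
background = shap.sample(X_train, 50, random_state=42)
explainer = shap.KernelExplainer(model.predict_proba, background)

# Approximate Shapley values for a few test instances; nsamples controls
# how many feature coalitions are evaluated per instance
shap_values = explainer.shap_values(X_test.iloc[:5], nsamples=200)
```

The background data defines what "absent" means for a feature when coalitions are sampled, so its choice (a random sample, k-means summary, or the full training set for very small data) directly affects the resulting attributions.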
Why use the KernelExplainer?
We have a...