Computing global and local attributions with SHAP’s KernelExplainer
Permutation methods perturb a model's inputs to assess how much each change affects the model's output. We first discussed this in Chapter 4, Global Model-Agnostic Interpretation Methods, but, if you recall, there's a coalitional framework for performing these permutations that produces the average marginal contribution of each feature across different coalitions of features. The outcome of this process is Shapley values, which have essential mathematical properties such as additivity and symmetry. Unfortunately, Shapley values are costly to compute for all but the smallest datasets, so the SHAP library offers approximation methods. One of these is KernelExplainer, which we also explained in Chapter 4 and used in Chapter 5, Local Model-Agnostic Interpretation Methods. It approximates Shapley values with a weighted local linear regression, just as LIME does.
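To make this concrete, here is a minimal sketch of KernelExplainer producing both local and global attributions. The dataset, model, background size of 10 clusters, and nsamples value are illustrative assumptions, not the book's own example:

```python
# A minimal sketch: KernelExplainer on a scikit-learn regressor.
# The diabetes dataset, random forest, and parameter values below
# are assumptions chosen for illustration.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Summarize the background data with k-means so the weighted linear
# regression stays tractable (cost grows with background size)
background = shap.kmeans(X, 10)

# KernelExplainer only needs a prediction function and background data,
# which is what makes it model-agnostic
explainer = shap.KernelExplainer(model.predict, background)

# Local attributions: estimated Shapley values for a few observations
shap_values = explainer.shap_values(X.iloc[:5], nsamples=200)

# Global attributions: mean absolute Shapley value per feature
global_importance = np.abs(shap_values).mean(axis=0)
print(dict(zip(X.columns, global_importance.round(3))))
```

Note the trade-off the nsamples argument exposes: more sampled coalitions give better Shapley value estimates at a higher computational cost, which is precisely why approximation is needed for anything but small datasets.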