Understanding classifications with perturbation-based attribution methods
The code for this section alone can be found here: https://github.com/PacktPublishing/Interpretable-Machine-Learning-with-Python/blob/master/Chapter08/FruitClassifier_part2.ipynb. All the preparation steps are repeated from the beginning. However, the notebook disables TensorFlow 2 behavior (tf.compat.v1.disable_v2_behavior()) because, at the time of writing, the alibi library, which we will use for the contrastive explanation method, still relies on TensorFlow 1 constructs.
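The following is a minimal sketch of that setup step, assuming only that tensorflow and alibi are installed; it is not the notebook's exact preamble:

```python
import tensorflow as tf

# alibi's contrastive explanation method still expects TF1-style graph execution,
# so we switch off TensorFlow 2 behavior before building or loading any model.
tf.compat.v1.disable_v2_behavior()

import alibi

# Printing the versions helps confirm which releases the disabled-v2 workaround applies to.
print("TensorFlow:", tf.__version__)
print("alibi:", alibi.__version__)
```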
Perturbation-based methods have already been covered to a great extent in this book. Many of the methods we have examined so far, including SHAP, LIME, Anchors, and even Permutation Feature Importance, employ perturbation-based strategies. The intuition behind them is that if you remove, alter, or mask features in your input data and then make predictions with the modified data, you'll be able to attribute the difference between the new predictions and the original ones to the features you perturbed.
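To make that intuition concrete, here is a small illustrative sketch (not the chapter's code): it masks one feature at a time with a baseline value and attributes the average change in the predicted probability to that feature. The `model` (anything with a `predict_proba` method) and the feature matrix `X` are hypothetical stand-ins:

```python
import numpy as np

def perturbation_importance(model, X, baseline=0.0):
    """Attribute prediction changes to features by masking them one at a time."""
    base_preds = model.predict_proba(X)[:, 1]        # predictions on the original data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        X_masked = X.copy()
        X_masked[:, j] = baseline                    # "remove" feature j by replacing it
        new_preds = model.predict_proba(X_masked)[:, 1]
        # The larger the average shift in predictions, the more the model relied on feature j.
        importances[j] = np.abs(base_preds - new_preds).mean()
    return importances
```

Methods such as SHAP and LIME refine this basic idea with principled sampling and weighting schemes, but the underlying mechanism of perturbing inputs and observing the effect on predictions is the same.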