Chapter 12, Cognitive XAI
- SHapley Additive exPlanations (SHAP) computes the marginal contribution of each feature as a SHAP value. (True|False)
True.
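A minimal sketch of this idea, assuming the `shap` and `scikit-learn` packages are installed; the dataset and model are illustrative choices, not the book's examples. Each printed number is the SHAP value of one feature, i.e. its estimated marginal contribution to this single prediction.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple model on an illustrative dataset.
data = load_diabetes()
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer assigns each feature a SHAP value: its estimated
# marginal contribution to the prediction for this one sample.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])
for name, value in zip(data.feature_names, shap_values[0]):
    print(f"{name}: {value:+.4f}")
```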
- Google's What-If Tool can display SHAP values as counterfactual data points. (True|False)
True.
- Counterfactual explanations include showing the distance between two data points. (True|False)
True.
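A short sketch of the distance idea behind counterfactual explanations, using hypothetical scaled feature vectors: the explanation reports how far the original point is from the nearest point that receives the opposite prediction.

```python
import numpy as np

# Hypothetical scaled feature vectors: the original instance and its
# nearest counterfactual (a point with the opposite prediction).
original = np.array([0.62, 0.30, 0.45])
counterfactual = np.array([0.62, 0.55, 0.45])

# The distance between the two points is part of the explanation:
# it tells us how little has to change to flip the outcome.
l1 = np.sum(np.abs(counterfactual - original))   # Manhattan (L1) distance
l2 = np.linalg.norm(counterfactual - original)   # Euclidean (L2) distance
print(f"L1 distance: {l1:.2f}, L2 distance: {l2:.2f}")
```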
- The contrastive explanations method (CEM) has an interesting way of interpreting the absence of a feature in a prediction. (True|False)
True.
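A conceptual sketch of the "absence of a feature" idea, not the CEM algorithm itself: each feature is replaced in turn by a no-information baseline (here the median, an illustrative choice) to see whether the prediction survives without it. The dataset and model are likewise illustrative.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]
baseline = np.median(X, axis=0)  # "absence" approximated by a no-information value

# Replace each feature in turn with its baseline value and check
# whether the prediction changes once that feature is absent.
original_class = model.predict(x.reshape(1, -1))[0]
for i in range(X.shape[1]):
    x_absent = x.copy()
    x_absent[i] = baseline[i]
    absent_class = model.predict(x_absent.reshape(1, -1))[0]
    status = "changes" if absent_class != original_class else "keeps"
    print(f"Removing feature {i} {status} the prediction")
```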
- Local Interpretable Model-agnostic Explanations (LIME) interprets the vicinity of a prediction. (True|False)
True.
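A minimal sketch, assuming the `lime` and `scikit-learn` packages; the dataset and model are illustrative. LIME perturbs the instance, queries the model in that local vicinity, and fits a simple interpretable model whose weights are reported as the explanation.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME explains one prediction by sampling points around it and
# fitting an interpretable surrogate model in that vicinity.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())
```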
- Anchors show the connection between features that can occur in positive and negative predictions. (True|False)
True.
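A sketch of how an anchor rule can be produced with the `alibi` library's AnchorTabular, one common implementation (not necessarily the book's tool); the dataset and model are illustrative. The anchor is an if-then rule over feature ranges that holds the prediction in place with high precision.

```python
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# An anchor is an if-then rule that "anchors" the prediction: whenever
# the rule holds, the model almost always predicts the same class.
explainer = AnchorTabular(model.predict, feature_names=data.feature_names)
explainer.fit(data.data)
explanation = explainer.explain(data.data[0], threshold=0.95)
print("Anchor:", " AND ".join(explanation.anchor))
print("Precision:", explanation.precision)
```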
- Tools such as Google Location History can provide additional information to explain the output of a machine learning model. (True|False)
True.