Reviewing why having explainability is not enough
Explainability helps us build trust with the users of our models. As you learned in this chapter, you can use explainability techniques to understand how your models generate their outputs for one or more instances in a dataset. These explanations can help us improve our models from both a performance and a fairness perspective. However, we cannot achieve such improvements by applying these techniques blindly and simply generating results in Python. For example, as we discussed in the Counterfactual generation using Diverse Counterfactual Explanations (DiCE) section, some of the generated counterfactuals might not be reasonable or meaningful, and we cannot rely on them. Similarly, when generating local explanations for one or more data points using SHAP or LIME, we need to pay attention to the meaning of each feature, the range of values it can take and what those values represent, and the characteristics of each data point we investigate...
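To make this concrete, the following is a minimal sketch, not the chapter's exact example, of how you can steer DiCE toward counterfactuals that remain plausible. It assumes a small hypothetical loan-approval dataset and a scikit-learn classifier, and it uses DiCE's features_to_vary and permitted_range arguments so that immutable features stay fixed and suggested values stay within realistic bounds.

# A minimal sketch (hypothetical column names, values, and ranges) showing how to
# constrain DiCE so that generated counterfactuals stay plausible and actionable.
import pandas as pd
import dice_ml
from sklearn.ensemble import RandomForestClassifier

# Hypothetical tabular data: predicting loan approval from income and age
df = pd.DataFrame({
    "income":   [30, 45, 80, 120, 25, 60, 95, 40],
    "age":      [22, 35, 41, 52, 19, 30, 47, 28],
    "approved": [0, 0, 1, 1, 0, 1, 1, 0],
})
model = RandomForestClassifier(random_state=0).fit(df[["income", "age"]], df["approved"])

# Wrap the data and model in DiCE's interfaces
d = dice_ml.Data(dataframe=df, continuous_features=["income", "age"], outcome_name="approved")
m = dice_ml.Model(model=model, backend="sklearn")
explainer = dice_ml.Dice(d, m, method="random")

query = df[["income", "age"]].iloc[[0]]   # the instance we want to explain
cfs = explainer.generate_counterfactuals(
    query,
    total_CFs=3,
    desired_class="opposite",
    features_to_vary=["income"],              # age cannot be changed by the applicant
    permitted_range={"income": [20, 150]},    # keep suggested incomes within a realistic range
)
cfs.visualize_as_dataframe(show_only_changes=True)

Even with such constraints, you should still review the returned counterfactuals manually before presenting them to users. The same kind of sanity check applies to SHAP and LIME local explanations: compare each feature's value in the explained instance against its realistic range and real-world meaning before drawing conclusions from its attribution.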