The LIME explainer
In this section, we will implement the LIME explainer, generate an explanation, explore some simulations, and interpret the results with visualizations.
However, though I avoided ethical issues by selecting bland datasets, the predictions still led to strange explanations!
Before creating the explainer in Python, let us sum up the tools we have for interpreting predictions and generating explanations.
We presented the equation of LIME in the A mathematical representation of LIME section of this chapter:

$$\xi(x) = \underset{g \in G}{\operatorname{argmin}} \; \mathcal{L}(f, g, \pi_x) + \Omega(g)$$

Here, f is the black-box model, g is an interpretable model drawn from a set G of candidates, \pi_x measures the proximity of sampled instances to the instance x being explained, and \Omega(g) penalizes the complexity of g.
The argmin term searches the closest possible area around a prediction for the interpretable model g that best approximates f locally, identifying the features that make the prediction fall into one class or another.
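For reference, in the original LIME paper by Ribeiro et al., the fidelity loss \mathcal{L} is a proximity-weighted squared error over perturbed samples z (with their interpretable representations z'); this form comes from the paper, not from the chapter's own code:

$$\mathcal{L}(f, g, \pi_x) = \sum_{z, z' \in \mathcal{Z}} \pi_x(z)\,\big(f(z) - g(z')\big)^2$$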
LIME will thus explain how a prediction was made, regardless of which model produced it or how it reached that prediction.
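To make this model-agnostic behavior concrete, here is a minimal sketch using the lime library; the fitted classifier model, the arrays X_train and X_test, and the feature_names and class_names lists are assumptions for illustration, not the chapter's exact code:

# A minimal sketch, assuming a fitted scikit-learn-style classifier
# `model`, training features `X_train`, a test instance `X_test[0]`,
# and `feature_names`/`class_names` lists (all placeholders).
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train,                          # data LIME perturbs and samples around
    feature_names=feature_names,      # assumed column names
    class_names=class_names,          # assumed label names
    mode="classification",
)

# LIME only needs a probability function, not the model's internals,
# which is what makes it model-agnostic.
explanation = explainer.explain_instance(
    X_test[0],                 # the prediction to explain
    model.predict_proba,       # the black-box prediction function
    num_features=6,            # how many features to keep in the explanation
)
print(explanation.as_list())   # (feature, local weight) pairs

Note that explain_instance receives model.predict_proba as an opaque callable; swapping in any other classifier requires no change to the explainer.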
Though LIME does not know which model was chosen by our experimental AutoML, we do. We know that we chose the best model, f, among five models in a set named M:

$$f \in M = \{m_1, m_2, m_3, m_4, m_5\}$$
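As an illustration of that selection step, a minimal best-of-five loop might look like the following sketch; the five candidate models, the accuracy metric, and the X_train, y_train, X_valid, y_valid arrays are assumptions, not the chapter's actual AutoML procedure:

# A minimal sketch of selecting the best model f from a set M of five
# candidates; the candidates, the accuracy score, and the train/validation
# arrays are assumptions, not the chapter's actual AutoML procedure.
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

M = {
    "m1": LogisticRegression(max_iter=1000),
    "m2": DecisionTreeClassifier(),
    "m3": RandomForestClassifier(),
    "m4": GradientBoostingClassifier(),
    "m5": GaussianNB(),
}

# Fit each candidate and score it on held-out data.
scores = {name: m.fit(X_train, y_train).score(X_valid, y_valid)
          for name, m in M.items()}

best_name = max(scores, key=scores.get)
f = M[best_name]   # f is the model LIME will later explain as a black box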
LIME does not know that...