Explaining a model's outcome with LIME
Now we move on to black box models. They have become far more common because of their effectiveness in areas such as NLP, computer vision, and other problems where feeding in vast amounts of data produces impressive results. These models aren't going anywhere, so we need a way to interpret them after the fact, using post-hoc interpretability.
The first approach we'll look at is Local Interpretable Model-Agnostic Explanations (LIME), which assumes that if you zoom in far enough on even a complex nonlinear relationship, you will find a linear one at the local level. LIME tries to learn this local linear relationship by creating synthetic records that are similar to the record we care about. By generating these points with slightly altered inputs and observing how the model's output changes, it can estimate the impact each feature has. As the name suggests, it's model-agnostic: it only needs the model's predictions, not access to its internals.
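To make the mechanism concrete, here is a minimal from-scratch sketch of the LIME idea for tabular data: perturb the instance of interest, query the black box on the perturbed records, weight each record by its proximity to the original, and fit a simple weighted linear model whose coefficients serve as local feature importances. The names `predict_fn`, `instance`, and `X_train` are assumptions for illustration, not part of any library's API.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(predict_fn, instance, X_train, num_samples=5000, kernel_width=0.75):
    """Approximate the black box locally around `instance` with a weighted linear model."""
    rng = np.random.default_rng(0)
    # 1. Create synthetic records by perturbing the instance with noise scaled
    #    to each feature's spread in the training data.
    scale = X_train.std(axis=0)
    perturbed = instance + rng.normal(0.0, scale, size=(num_samples, instance.shape[0]))
    # 2. Ask the black-box model for predictions on the synthetic records.
    preds = predict_fn(perturbed)
    # 3. Weight each synthetic record by its closeness to the original instance
    #    (an exponential kernel over the scaled Euclidean distance).
    distances = np.sqrt((((perturbed - instance) / scale) ** 2).sum(axis=1))
    weights = np.exp(-(distances ** 2) / (kernel_width * instance.shape[0]))
    # 4. Fit a simple, interpretable linear model on the weighted samples.
    local_model = Ridge(alpha=1.0)
    local_model.fit(perturbed, preds, sample_weight=weights)
    # The coefficients are the local feature importances for this one prediction.
    return local_model.coef_
```

In practice you would reach for the open-source `lime` package rather than rolling your own; its `LimeTabularExplainer` wraps the same perturb-predict-weight-fit loop, along with sensible handling of categorical features and discretization.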