Introducing LIME
LIME stands for Local Interpretable Model-agnostic Explanations. LIME explanations can help a user trust an AI system. A machine learning model is often trained on a hundred features or more to reach a prediction. Displaying all of these features in an interface makes it nearly impossible for a user to analyze the result visually.
In Chapter 4, Microsoft Azure Machine Learning Model Interpretability with SHAP, we used SHAP to calculate the marginal contribution of a feature, both to the model and to a given prediction. The Shapley value of a feature represents its contribution averaged over the sets of features it can join. LIME takes a different approach.
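As a quick reminder of that approach, the Shapley value of a feature $i$ can be written as follows, with $N$ the set of all features and $v(S)$ the model's output for a subset $S$ of features (the notation here is ours, for illustration):

$$
\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr)
$$

Each term weighs the gain $v(S \cup \{i\}) - v(S)$ that feature $i$ brings when it joins the set $S$, which is what we mean by its marginal contribution.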
LIME sets out to determine whether a model is locally faithful, regardless of what the model is. Local fidelity verifies how the model behaves in the vicinity of a prediction. An explanation that is locally faithful might not fit the model globally, but it explains how that prediction was made. In the same way, a global explanation of the model might not explain a local prediction.
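The following is a minimal sketch of this local approach, assuming the lime package and a scikit-learn classifier; the dataset and model here are illustrative choices, not this chapter's own:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# An illustrative model; any classifier with a predict_proba method works
data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# The explainer perturbs samples around one instance and fits a simple,
# interpretable surrogate that is faithful only in that local region
explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction: the weights describe local, not global, behavior
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(explanation.as_list())
```

The list printed at the end pairs a handful of features with weights for this one instance; nothing in it claims to describe the model's behavior far from that sample, which is exactly the local fidelity LIME targets.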