Summary
In this chapter, we confirmed that an AI system must base its approach on trust. A user must be able to understand a prediction and the criteria on which an ML model produces its outputs.
LIME tackles AI explainability locally, where misunderstandings damage the human-machine relationship. LIME's explainer is not satisfied with an accurate global model; it digs down to explain each prediction locally.
We installed LIME and retrieved newsgroup texts on electronics and space, then vectorized the data and created several models.
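The data preparation step might look like the following minimal sketch. It assumes scikit-learn's fetch_20newsgroups loader, the sci.electronics and sci.space categories, a TF-IDF vectorizer, and an illustrative set of classifiers; the exact model list here is an assumption, not the chapter's definitive selection.

```python
# Minimal sketch: retrieve the newsgroup texts, vectorize them, and
# create several candidate models (the model list is illustrative).
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

# Retrieve the electronics and space newsgroup texts
categories = ["sci.electronics", "sci.space"]
newsgroups_train = fetch_20newsgroups(subset="train", categories=categories)
newsgroups_test = fetch_20newsgroups(subset="test", categories=categories)

# Vectorize the raw texts
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(newsgroups_train.data)
X_test = vectorizer.transform(newsgroups_test.data)

# Create and train several models
models = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "naive_bayes": MultinomialNB(),
}
for model in models.values():
    model.fit(X_train, newsgroups_train.target)
```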
Once the models and the LIME explainer were implemented, we ran an experimental AutoML module: every model was activated to generate predictions, the accuracy of each model was recorded and compared with that of its competitors, and the best model was then selected to make the predictions that LIME would explain.
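The AutoML-style comparison could be sketched as follows, reusing the hypothetical models, X_test, and newsgroups_test objects from the previous sketch: each model predicts on the test set, its accuracy is recorded, and the best scorer is kept for the LIME explanations.

```python
# Sketch of the experimental AutoML-style comparison, assuming the
# `models`, `X_test`, and `newsgroups_test` objects defined above.
from sklearn.metrics import accuracy_score

scores = {}
for name, model in models.items():
    predictions = model.predict(X_test)
    scores[name] = accuracy_score(newsgroups_test.target, predictions)
    print(f"{name}: accuracy = {scores[name]:.4f}")

# Select the best-performing model for the LIME explainer
best_name = max(scores, key=scores.get)
best_model = models[best_name]
print("Best model:", best_name)
```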
The final score of each model thus showed which one performed best on this dataset, and we saw how LIME could explain that model's predictions locally.
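A local LIME explanation of one of the best model's predictions might look like the following sketch. It assumes LIME's LimeTextExplainer and a scikit-learn pipeline that maps raw text to class probabilities; the document index and the number of features shown are illustrative choices, not values from the chapter.

```python
# Sketch of a local LIME explanation for one test document, assuming
# the `vectorizer`, `best_model`, and `newsgroups_test` objects above.
from lime.lime_text import LimeTextExplainer
from sklearn.pipeline import make_pipeline

# Pipeline so LIME can feed raw text and receive class probabilities
pipeline = make_pipeline(vectorizer, best_model)
explainer = LimeTextExplainer(class_names=newsgroups_test.target_names)

# Explain a single prediction locally
idx = 0
exp = explainer.explain_instance(
    newsgroups_test.data[idx], pipeline.predict_proba, num_features=6
)
print("True label:", newsgroups_test.target_names[newsgroups_test.target[idx]])
for word, weight in exp.as_list():
    print(f"{word}: {weight:+.4f}")
```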