A cognitive approach to vectorizers
AI and XAI outperform us in many cases. This is a good thing because that's what we designed them for! What would we do with slow and imprecise AI?
However, in some cases, we do not just request an AI explanation; we also need to understand it.
In Chapter 8, Local Interpretable Model-Agnostic Explanations (LIME), we reached several interesting conclusions. However, we were left with an intriguing comment about the dataset.
In this section, we will use our human cognitive abilities not only to explain, but also to understand, the third of the conclusions we reached in Chapter 8:
- LIME can prove that even accurate predictions cannot be trusted without XAI
- Local interpretable models will measure to what extent we can trust a prediction
- Local explanations might show that the dataset cannot be trusted to produce reliable predictions (see the sketch after this list)
- Explainable AI can prove that a model cannot be trusted or that it is reliable
- LIME...
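
To make that third conclusion more concrete, here is a minimal sketch of how a LIME local explanation is obtained from a text classifier built on top of a vectorizer. The toy corpus, labels, and the scikit-learn TfidfVectorizer/LogisticRegression pipeline are illustrative assumptions, not the chapter's actual dataset or model; the sketch only shows the general mechanics of a local explanation.

```python
# Illustrative sketch only: a hypothetical toy corpus, not the book's dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Hypothetical data: label 1 = "positive", 0 = "negative".
texts = ["good product, works well", "excellent and reliable",
         "bad quality, broke fast", "terrible, do not buy"]
labels = [1, 1, 0, 0]

# The vectorizer turns raw text into numerical features for the classifier.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

# LIME perturbs the input text and fits a local interpretable model
# around one prediction to show which words drove it.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "good but broke fast", pipeline.predict_proba, num_features=4)
print(explanation.as_list())  # word-level weights for this single prediction
```

The word-level weights returned by as_list() are the kind of local evidence we will reason about with our own cognitive abilities: they can reveal when the dataset itself, and not just the model, is what makes a prediction unreliable.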