Choosing XAI techniques
Fundamental XAI principles require AI systems to provide explanations, evidence, and supporting information that humans can understand in order to justify model outputs. The main objective of XAI is to make AI decisions comprehensible to everyday users. Choosing an XAI technique for deep learning anomaly detection requires careful deliberation and thorough analysis of the following aspects:
- Analyze stakeholders and the scope of explainability: Know your audience, understand their roles, and discover what matters to them. Gather functional and non-functional requirements using questionnaires or existing documentation – for example, what qualifies as an anomaly in their business domain? What is an acceptable threshold when detecting outliers? What will they do with the explanations? Do they need reasoning for every individual prediction (local explanations), or an aggregated view of overall model behavior (global explanations)?
- Identify data modalities: XAI techniques differ by the data input...