Defining explainable AI
Explainable AI, also referred to as AI explainability or simply XAI, seems simple. You just take an AI algorithm and explain it. It seems so elementary that you might even wonder why we bothered to write a book on it!
Before the rise of XAI, the typical AI workflow was minimal. The world and the activities around us produce datasets. These datasets were fed into black-box AI algorithms, with no visibility into what was happening inside them. Finally, human users had to either trust the system or launch an expensive investigation. The following diagram represents this former AI process:
Figure 1.1: AI process
In a non-XAI approach, the user is puzzled by the output: they do not trust the algorithm, cannot tell from the output whether the answer is correct, and do not know how to control the process.
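To make this concrete, the following minimal sketch reproduces that black-box workflow in code. The library, model, and dataset (scikit-learn's RandomForestClassifier on its bundled breast cancer data) are illustrative assumptions rather than part of the process described above; the point is only that the user receives a prediction and a score with no explanation attached.

```python
# A minimal sketch of the non-XAI workflow, assuming scikit-learn and its
# built-in breast cancer dataset purely for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# The world produces a dataset...
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# ...which is fed into a black-box algorithm...
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# ...and the user receives only an output, with no explanation attached.
print("Prediction for the first test sample:", model.predict(X_test[:1]))
print("Test accuracy:", model.score(X_test, y_test))
# The user must either trust these numbers or start an expensive investigation.
```

Nothing in this output tells the user which features drove the prediction or whether the model can be trusted on a new case, which is exactly the gap XAI sets out to close.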
In a typical XAI approach, the user obtains answers, as shown in the following diagram. The user trusts the algorithm...