Summary
In this chapter, we defined XAI, a new approach to AI that builds users' trust in the system. We saw that each type of user requires a different level of explanation, and that XAI varies from one phase of a process to another: an explainable model applied to the input data requires specific features, while explaining a machine learning algorithm's behavior calls for other functions.
With these XAI methods in mind, we then built an experimental KNN program that could help a general practitioner make a diagnosis when the same symptoms could point to several diseases.
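The sketch below illustrates the kind of KNN classifier this refers to; it is a minimal illustration, not the chapter's exact program, and the symptom features, severity values, and disease labels are assumptions chosen only to show the approach.

```python
# Minimal sketch of a KNN classifier mapping symptom vectors to candidate
# diagnoses. Feature names, sample values, and labels are illustrative only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training data: each row is [fever, cough, headache, fatigue]
# encoded on a 0-1 severity scale.
X_train = np.array([
    [0.9, 0.7, 0.6, 0.8],   # flu-like case
    [0.2, 0.8, 0.1, 0.3],   # cold-like case
    [0.8, 0.9, 0.2, 0.9],   # pneumonia-like case
    [0.1, 0.1, 0.9, 0.4],   # migraine-like case
])
y_train = ["flu", "cold", "pneumonia", "migraine"]

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

# A new patient's symptoms: the nearest neighbors make the prediction
# explainable, because the doctor can inspect the similar past cases.
patient = np.array([[0.85, 0.75, 0.5, 0.7]])
print(knn.predict(patient))
distances, indices = knn.kneighbors(patient)
print("Nearest training cases:", indices, "at distances", distances)
```

Inspecting the returned neighbors is what makes the prediction explainable: the doctor can see which past cases the diagnosis is based on.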
We added XAI to every phase of the AI project, introducing explainable interfaces for the input data, the model, the output data, and the whole reasoning process that leads to a diagnosis. This XAI process helped the doctor trust the AI's predictions.
We improved the program by adding the patient's Google Location History data to the KNN model, using a Python program to parse a JSON file. We also added information on...
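A minimal sketch of that parsing step is shown below, assuming the classic Google Takeout layout with a "locations" array and latitudeE7/longitudeE7/timestampMs fields; the file name and field names may differ in other exports and are assumptions here, not the chapter's exact code.

```python
# Minimal sketch of parsing a Google Location History JSON export with the
# standard json module. Field names reflect the classic Takeout format
# (assumption); newer exports may use a different structure.
import json

with open("Location History.json", "r", encoding="utf-8") as f:
    history = json.load(f)

points = []
for record in history.get("locations", []):
    # Coordinates are stored as integers scaled by 1e7 in this format.
    lat = record["latitudeE7"] / 1e7
    lon = record["longitudeE7"] / 1e7
    points.append((record.get("timestampMs"), lat, lon))

print(f"Parsed {len(points)} location points")
print(points[:3])
```

The extracted coordinates and timestamps can then be turned into features for the KNN model, so location context contributes to the diagnosis alongside the symptoms.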