Summary
In this chapter, we learned how to explain AI predictions with human-like reasoning. A bank will grant a loan because the applicant's credit card debt is low or non-existent. A consumer might buy a soda because it advertises a low sugar level.
We saw how the contrastive explanation method (CEM) can interpret a medical diagnosis. This example expanded our representation of the XAI process described in Chapter 1, Explaining Artificial Intelligence with Python.
We then explored a Python program that explained how predictions were reached on the MNIST dataset, which numerous machine learning programs have used. The CEM's innovative approach explained that an image of the number 5, for example, could be predicted as a 5 because it lacks features that a 3 or an 8 contains.
We created, trained, saved, and tested a CNN, and did the same with an autoencoder.
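The following is a minimal sketch of those two steps, not the chapter's exact code: a small Keras CNN classifier and a convolutional autoencoder for MNIST, each trained, saved, and evaluated. The layer sizes, epoch counts, and file names (mnist_cnn.h5, mnist_ae.h5) are illustrative assumptions.

```python
# Minimal sketch: a CNN classifier and a convolutional autoencoder for MNIST.
# Hyperparameters and file names are illustrative, not the chapter's settings.
import tensorflow as tf
from tensorflow.keras import layers, models

# Load and normalize MNIST; the explainer later expects a known input range.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0

# CNN classifier: predicts the digit class (0-9).
cnn = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
cnn.fit(x_train, y_train, epochs=3, batch_size=128, validation_split=0.1)
cnn.save("mnist_cnn.h5")                              # save the classifier
print(cnn.evaluate(x_test, y_test, verbose=0))        # test it

# Convolutional autoencoder: reconstructs the input image; it is later used
# to keep pertinent negatives and positives close to realistic digits.
ae = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", padding="same",
                  input_shape=(28, 28, 1)),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.Conv2D(1, 3, activation="sigmoid", padding="same"),
])
ae.compile(optimizer="adam", loss="mse")
ae.fit(x_train, x_train, epochs=3, batch_size=128, validation_split=0.1)
ae.save("mnist_ae.h5")                                # save the autoencoder
print(ae.evaluate(x_test, x_test, verbose=0))         # test it
```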
Finally, we created a CEM explainer that displayed pertinent negatives (PNs) and pertinent positives (PPs).
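A minimal sketch of such an explainer follows, assuming the open-source Alibi library (alibi.explainers.CEM) and the model files saved in the previous sketch; the hyperparameter values (kappa, beta, gamma, c_init, c_steps) are illustrative, not the chapter's exact settings.

```python
# Minimal sketch of a CEM explainer for one MNIST image, assuming the Alibi
# library and the hypothetical model files saved in the previous sketch.
import tensorflow as tf
tf.compat.v1.disable_eager_execution()    # Alibi's CEM runs on TF1-style graphs
from tensorflow.keras.models import load_model
from alibi.explainers import CEM

(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0

cnn = load_model("mnist_cnn.h5")          # classifier from the previous step
ae = load_model("mnist_ae.h5")            # autoencoder from the previous step

X = x_test[0:1]                           # the instance to explain
shape = (1,) + x_train.shape[1:]          # (1, 28, 28, 1)

# Pertinent negative (PN): the minimal absent features that, if added,
# would change the predicted class.
cem_pn = CEM(cnn, mode='PN', shape=shape, kappa=0., beta=.1, gamma=100.,
             ae_model=ae, max_iterations=1000, c_init=1., c_steps=10,
             feature_range=(x_train.min(), x_train.max()))
cem_pn.fit(x_train, no_info_type='median')
explanation_pn = cem_pn.explain(X)
print('PN class:', explanation_pn.PN_pred)

# Pertinent positive (PP): the minimal present features that are, on their
# own, sufficient to keep the original prediction.
cem_pp = CEM(cnn, mode='PP', shape=shape, kappa=0., beta=.1, gamma=100.,
             ae_model=ae, max_iterations=1000, c_init=1., c_steps=10,
             feature_range=(x_train.min(), x_train.max()))
cem_pp.fit(x_train, no_info_type='median')
explanation_pp = cem_pp.explain(X)
print('PP class:', explanation_pp.PP_pred)
```

Passing the autoencoder as ae_model regularizes the search so that the returned PN and PP perturbations stay close to plausible MNIST digits rather than arbitrary pixel noise.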
Contrastive explanations shed new light on explainable AI...