Explaining Graph Neural Networks
One of the most common criticisms of NNs is that their outputs are difficult to understand. Unfortunately, GNNs are not immune to this limitation: explaining a prediction requires accounting not only for which input features are important, but also for the neighboring nodes and connections that influenced it. In response to this issue, the field of explainability (explainable AI, or XAI) has developed many techniques to better understand the reasons behind a prediction or the general behavior of a model. Some of these techniques have been adapted to GNNs, while others take advantage of the graph structure to offer more precise explanations.
In this chapter, we will explore explanation techniques that help us understand why a given prediction was made. We will cover the main families of techniques and focus on two of the most popular: GNNExplainer and integrated gradients. We will apply the former to a graph classification task using the MUTAG dataset. Then, we will introduce...
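As a quick preview of what this looks like in practice, here is a minimal sketch of running GNNExplainer on a MUTAG graph classifier using the torch_geometric.explain module available in recent versions of PyTorch Geometric. The small two-layer GCN, its hyperparameters, and the fact that it is left untrained here are illustrative assumptions for the sketch, not the model we build in this chapter.

```python
import torch
from torch_geometric.datasets import TUDataset
from torch_geometric.explain import Explainer, GNNExplainer
from torch_geometric.nn import GCNConv, global_mean_pool

# Load the MUTAG molecular graph classification dataset
dataset = TUDataset(root='data/TUDataset', name='MUTAG')
data = dataset[0]  # a single molecule whose prediction we want to explain

# Hypothetical two-layer GCN for graph classification (assumed architecture)
class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_node_features, 32)
        self.conv2 = GCNConv(32, 32)
        self.lin = torch.nn.Linear(32, dataset.num_classes)

    def forward(self, x, edge_index, batch=None):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        h = global_mean_pool(h, batch)  # graph-level readout
        return self.lin(h)

model = GCN()  # in practice, train this model before explaining it

# Wrap the model in an Explainer that runs the GNNExplainer algorithm
explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=100),
    explanation_type='model',
    node_mask_type='attributes',
    edge_mask_type='object',
    model_config=dict(
        mode='multiclass_classification',
        task_level='graph',
        return_type='raw',
    ),
)

# Learn soft masks over edges and node features for this molecule
explanation = explainer(data.x, data.edge_index)
print(explanation.edge_mask)  # importance score for each bond (edge)
print(explanation.node_mask)  # importance score for each atom feature
```

Mask values close to 1 mark the edges and node features that mattered most for the prediction; they can then be thresholded or visualized to highlight the subgraph that drove the model's decision.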