Introducing explanation techniques
GNN explanation is a recent field heavily inspired by other XAI techniques [1]. We divide it into local explanations, produced on a per-prediction basis, and global explanations, which describe the behavior of an entire model. While understanding the overall behavior of a GNN model is desirable, we will focus on local explanations, which are more popular and essential for gaining insight into individual predictions.
In this chapter, we distinguish between “interpretable” and “explainable” models. A model is called “interpretable” if it is human-understandable by design, such as a decision tree. On the other hand, it is “explainable” when it acts as a black box whose predictions can only be understood post hoc using explanation techniques. This is typically the case with NNs: unlike a decision tree, their weights and biases do not translate into clear rules, but their predictions can still be explained indirectly.
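To make this distinction concrete, here is a minimal sketch, assuming scikit-learn is available (the library, dataset, and hyperparameters are illustrative choices, not part of this chapter). It fits a small decision tree whose learned rules can be printed and read directly, then a small neural network whose learned weights reveal nothing readable:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

iris = load_iris()
X, y = iris.data, iris.target

# Interpretable by design: the fitted tree is a set of readable if/else rules
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=iris.feature_names))

# Black box: the fitted NN exposes only weight matrices, not rules
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)
print([w.shape for w in mlp.coefs_])  # e.g., [(4, 16), (16, 3)] -- opaque weights

The tree prints human-readable split rules (thresholds on individual features), while the NN only exposes arrays of weights. This gap is precisely why post hoc explanation techniques are needed for NNs, and for GNNs in particular.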
There are four main categories of local explanation...