Summary
In this chapter, we explored the field of explainable AI (XAI) applied to GNNs. Explainability is a key requirement in many domains and can help us build better models. We reviewed techniques for producing local explanations and focused on GNNExplainer (a perturbation-based method) and integrated gradients (a gradient-based method). We implemented both on two different datasets using PyTorch Geometric and Captum to obtain explanations for graph and node classification. Finally, we visualized and discussed the results of these techniques.
In Chapter 15, Forecasting Traffic Using A3T-GCN, we will revisit temporal GNNs to predict future traffic on a road network. In this practical application, we will see how to convert a road network into a graph and apply a recent GNN architecture to accurately forecast short-term traffic.