Summary
In this chapter, we defined the expressive power of GNNs in terms of the Weisfeiler-Leman (WL) test, an algorithm that produces a canonical form of a graph by iteratively refining node labels. The test cannot distinguish every pair of non-isomorphic graphs, but it separates most graph structures. It inspired the GIN architecture, which is designed to be as expressive as the WL test and is therefore strictly more expressive than GCNs, GATs, or GraphSAGE.
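The color refinement at the heart of the WL test can be sketched in a few lines of plain Python. This is a minimal illustration, not the chapter's implementation: each node's label is repeatedly combined with the sorted multiset of its neighbors' labels, and the final multiset of labels serves as a graph signature that can separate many non-isomorphic graphs.

```python
from collections import Counter

def wl_refinement(adj, labels, iterations=3):
    """1-dimensional WL color refinement (illustrative sketch).

    adj: adjacency list {node: [neighbors]}; labels: initial node colors.
    Returns the multiset of final colors, usable as a graph signature.
    """
    colors = dict(labels)
    for _ in range(iterations):
        new_colors = {}
        for node, neighbors in adj.items():
            # Combine the node's color with the sorted multiset of its
            # neighbors' colors, then compress it into a new color.
            signature = (colors[node],
                         tuple(sorted(colors[n] for n in neighbors)))
            new_colors[node] = hash(signature)
        colors = new_colors
    return Counter(colors.values())

# Two non-isomorphic graphs: a triangle and a 3-node path,
# both starting with identical node labels.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
init = {0: 1, 1: 1, 2: 1}
print(wl_refinement(triangle, init) != wl_refinement(path, init))  # True
```

Two graphs whose signatures differ are guaranteed to be non-isomorphic; equal signatures, however, do not prove isomorphism, which is why the test is not perfect.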
We then implemented this architecture for graph classification and compared different methods of combining node embeddings into graph embeddings. GIN introduces a new readout: node embeddings are summed within each GIN layer, and the resulting per-layer graph embeddings are concatenated. This significantly outperformed the classic global mean pooling obtained with GCN layers. Finally, we combined the predictions of both models into a simple ensemble, which increased the accuracy score even further.
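The readout and the ensemble described above can be sketched with plain Python lists (the embedding values below are hypothetical, chosen only to show the shapes involved; the chapter's version operates on PyTorch tensors):

```python
# Hypothetical per-layer node embeddings for one graph:
# three GIN layers, four nodes, 2-dimensional embeddings.
layer_embeddings = [
    [[0.1, 0.2], [0.3, 0.1], [0.0, 0.5], [0.2, 0.2]],  # layer 1
    [[0.4, 0.1], [0.2, 0.3], [0.1, 0.1], [0.3, 0.0]],  # layer 2
    [[0.0, 0.2], [0.5, 0.1], [0.2, 0.4], [0.1, 0.1]],  # layer 3
]

def gin_readout(per_layer):
    """Sum node embeddings within each layer, then concatenate the sums."""
    graph_embedding = []
    for nodes in per_layer:
        summed = [sum(dim) for dim in zip(*nodes)]
        graph_embedding.extend(summed)
    return graph_embedding

def ensemble(p_gcn, p_gin):
    """Average the class probabilities predicted by the two models."""
    return [(a + b) / 2 for a, b in zip(p_gcn, p_gin)]

# The readout yields a 6-dimensional graph embedding:
# one 2-dimensional sum per layer, concatenated.
print(gin_readout(layer_embeddings))
# Averaging two models' class probabilities (hypothetical values).
print(ensemble([0.9, 0.1], [0.7, 0.3]))
```

Summing (rather than averaging) preserves information about the number of nodes, and concatenating every layer's sum retains both local and more global structural features.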
In Chapter 10, Predicting Links with Graph Neural Networks, we will explore another popular task with GNNs...