Summary
In this chapter, we learned about the missing link between vanilla neural networks and GNNs. We built our own GNN architecture using our intuition and a bit of linear algebra. We then explored two popular graph datasets from the scientific literature and used them to compare two architectures: a standard MLP and our vanilla GNN. Finally, we implemented both in PyTorch and evaluated their performance. The result is clear: even our intuitive version of a GNN comfortably outperforms the MLP on both datasets.
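As a reminder of the core idea, here is a minimal sketch of the kind of layer we built in this chapter: a linear transformation followed by neighbor aggregation through the adjacency matrix. The class name VanillaGNNLayer and the use of a dense adjacency tensor are illustrative assumptions, not the only possible implementation.

```python
import torch
from torch.nn import Linear

class VanillaGNNLayer(torch.nn.Module):
    """One vanilla GNN layer (sketch): H' = A_hat @ X @ W."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        # Learnable weight matrix W
        self.linear = Linear(dim_in, dim_out, bias=False)

    def forward(self, x, adjacency):
        x = self.linear(x)            # X @ W
        x = torch.mm(adjacency, x)    # aggregate each node's neighbors: A_hat @ (X @ W)
        return x
```

Here, adjacency is assumed to already include self-loops (A_hat = A + I), so each node keeps its own features during aggregation.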
In Chapter 6, Normalizing Embeddings with Graph Convolutional Networks, we will refine our vanilla GNN architecture to correctly normalize its inputs. This graph convolutional network (GCN) model is an efficient baseline that we'll keep using throughout the rest of the book. We will compare its results on our two previous datasets and introduce an interesting new task: node regression.