Summary
In this chapter, we improved our vanilla GNN layer by normalizing node features according to node degrees. This enhancement gave us the GCN layer. We compared this new architecture to Node2Vec and our vanilla GNN on the Cora and Facebook Page-Page datasets. Thanks to this normalization process, the GCN obtained the highest accuracy scores by a large margin in both cases. Finally, we applied it to node regression on the Wikipedia Network dataset and learned how to handle this new task.
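As a quick refresher, the following is a minimal sketch of the symmetric, degree-based normalization at the heart of the GCN layer. The function name gcn_normalize is illustrative, and a dense adjacency matrix is assumed for simplicity (in practice, a sparse edge list is usually used instead):

import torch

def gcn_normalize(adjacency: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization used by the GCN layer (illustrative sketch)."""
    # Add self-loops so each node also keeps its own features
    a_hat = adjacency + torch.eye(adjacency.size(0))
    # Node degrees, including the self-loop
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = deg.pow(-0.5)
    # D^{-1/2} * A_hat * D^{-1/2}
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)

# A GCN layer then computes H' = gcn_normalize(A) @ H @ W,
# so highly connected neighbors contribute proportionally less.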
In Chapter 7, Graph Attention Networks, we will go a step further by distinguishing between neighboring nodes based on their importance. We will see how to weight node features automatically through a process called self-attention. This will improve performance, as we will see by comparing the resulting architecture to the GCN.