Part 2: Fundamentals
In this second part of the book, we will delve into the process of building node representations with graph learning. We will start by exploring traditional graph learning techniques that draw on advances made in natural language processing, with the aim of understanding how they can be applied to graphs to build node representations.
We will then move on to incorporating node features into our models and exploring how these features can be used to build even more accurate representations. Finally, we will introduce two of the most fundamental GNN architectures, the Graph Convolutional Network (GCN) and the Graph Attention Network (GAT). These two architectures are the building blocks of many state-of-the-art graph learning methods and will provide a solid foundation for the next part.
By the end of this part, you will have a deeper understanding of how traditional graph learning techniques, such as random walks, can be used to build node representations, and how GNN architectures such as GCNs and GATs improve on them by incorporating node features.