Implementing a heterogeneous GNN
In this section, we will implement a heterogeneous GNN using a GraphSAGE
operator. This architecture will allow us to consider both node types (hosts and flows) to build better embeddings. This is done by duplicating and sharing messages across different layers, as shown in the following figure.
Figure 16.5 – Architecture of the heterogeneous GNN
We will implement three SAGEConv layers with LeakyReLU activations for each node type. Finally, a linear layer will output a five-dimensional vector, where each dimension corresponds to a class. Furthermore, we will train this model in a supervised way using the cross-entropy loss and the Adam optimizer:
- We import the relevant neural network layers from PyTorch Geometric:
import torch_geometric.transforms as T
from torch_geometric.nn import Linear, HeteroConv, SAGEConv
- We define the heterogeneous GNN with three parameters: the number of hidden dimensions, the number of...