Implementing the LightGCN architecture
The LightGCN [4] architecture aims to learn representations for nodes by smoothing features over the graph. It iteratively performs graph convolution, where neighboring nodes’ features are aggregated as the new representation of a target node. The entire architecture is summarized in Figure 17.6.
Figure 17.6 – LightGCN model architecture with convolution and layer combination
However, LightGCN adopts a simple weighted sum aggregator rather than using feature transformation or nonlinear activation, as seen in other models such as the GCN or GAT. The light graph convolution operation calculates the $(k+1)$-th user and item embeddings, $e_u^{(k+1)}$ and $e_i^{(k+1)}$, as follows:

$$e_u^{(k+1)} = \sum_{i \in \mathcal{N}_u} \frac{1}{\sqrt{|\mathcal{N}_u|}\sqrt{|\mathcal{N}_i|}}\, e_i^{(k)}$$

$$e_i^{(k+1)} = \sum_{u \in \mathcal{N}_i} \frac{1}{\sqrt{|\mathcal{N}_i|}\sqrt{|\mathcal{N}_u|}}\, e_u^{(k)}$$
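To make this concrete, here is a minimal sketch of one light graph convolution step in PyTorch. It is not the book's code: the names `light_graph_convolution`, `emb_users`, `emb_items`, and `norm_adj` are illustrative, and it assumes the interactions have already been assembled into a symmetrically normalized adjacency matrix. The step is just a sparse matrix multiplication, with no learned weights or nonlinearity:

```python
import torch

def light_graph_convolution(emb_users, emb_items, norm_adj):
    """One light graph convolution step: a weighted sum of neighbor
    embeddings with no feature transformation or nonlinearity.

    norm_adj is assumed to be the symmetrically normalized user-item
    adjacency matrix D^(-1/2) A D^(-1/2), stored as a sparse tensor of
    shape (n_users + n_items, n_users + n_items).
    """
    # Stack user and item embeddings into a single matrix e^(k)
    emb = torch.cat([emb_users, emb_items], dim=0)
    # e^(k+1) = D^(-1/2) A D^(-1/2) e^(k)
    emb_next = torch.sparse.mm(norm_adj, emb)
    # Split the result back into user and item embeddings
    return torch.split(emb_next, [emb_users.size(0), emb_items.size(0)], dim=0)
```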
The symmetric normalization term $\frac{1}{\sqrt{|\mathcal{N}_u|}\sqrt{|\mathcal{N}_i|}}$ ensures that the scale of the embeddings does not grow with repeated graph convolution operations. In contrast to other models, LightGCN only aggregates the connected neighbors and does not add a self-loop, so a target node's own embedding does not contribute directly to its updated representation.
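The absence of self-loops shows up when the normalized adjacency matrix is built. The sketch below (again an illustration under the same assumptions as the previous snippet, not the book's implementation) constructs the matrix used above from a user-item `edge_index`, where row 0 holds user indices and row 1 holds item indices, without ever adding diagonal entries:

```python
import torch

def build_norm_adj(edge_index, num_users, num_items):
    """Symmetrically normalized adjacency for LightGCN, without self-loops."""
    n = num_users + num_items
    users, items = edge_index[0], edge_index[1] + num_users
    # Make the bipartite interaction graph symmetric (user->item and item->user)
    rows = torch.cat([users, items])
    cols = torch.cat([items, users])
    # Node degrees |N_u| and |N_i| in the symmetric graph
    deg = torch.bincount(rows, minlength=n).float()
    deg_inv_sqrt = deg.pow(-0.5)
    deg_inv_sqrt[torch.isinf(deg_inv_sqrt)] = 0.0
    # Edge weights 1 / (sqrt(|N_u|) * sqrt(|N_i|)); no diagonal entries are added
    values = deg_inv_sqrt[rows] * deg_inv_sqrt[cols]
    return torch.sparse_coo_tensor(torch.stack([rows, cols]), values, (n, n))
```

Because no self-connections are inserted here, each convolution step mixes in only neighbor information; in LightGCN, the effect of a node's own embedding is instead recovered later through the layer combination shown in Figure 17.6.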