GNNs were introduced in 2005 and have received a lot of attention over the last 5 years or so. The key idea behind them is to generalize the principles of CNNs and RNNs so that they can be applied to any type of dataset, including graphs. This section is only a short introduction to GNNs, since we would require an entire book to fully explore the topic. As usual, more references are given in the Further reading section if you would like to gain a deeper understanding of this topic.
Extending the principles of CNNs and RNNs to build GNNs
CNNs and RNNs both aggregate information from a neighborhood, each in its own specific context. For RNNs, the context is a sequence of inputs (words, for instance), and a sequence is nothing more than a special type of graph. The same applies to CNNs, which are used to analyze images: an image is a pixel grid, which is also a special type of graph in which each pixel is connected to its adjacent pixels. It is therefore logical to try and use neural...
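To make this analogy concrete, here is a minimal sketch in plain NumPy (the helper names `chain_adjacency`, `grid_adjacency`, and `aggregate` are hypothetical, not part of any library). It shows that a sequence and a pixel grid can both be described by the same adjacency structure, so a single neighborhood-aggregation step, the core operation a GNN generalizes to arbitrary graphs, runs unchanged on either of them:

```python
import numpy as np

def chain_adjacency(n):
    """Adjacency list of a length-n sequence: node i is linked to i-1 and i+1."""
    return {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

def grid_adjacency(h, w):
    """Adjacency list of an h x w pixel grid: each pixel is linked to its
    up/down/left/right neighbors (4-connectivity)."""
    adj = {}
    for r in range(h):
        for c in range(w):
            neighbors = []
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    neighbors.append(rr * w + cc)
            adj[r * w + c] = neighbors
    return adj

def aggregate(features, adjacency):
    """One neighborhood-aggregation step: each node's new feature vector is the
    mean of its own features and those of its neighbors (a simplified GNN layer)."""
    new_features = np.zeros_like(features)
    for node, neighbors in adjacency.items():
        group = [node] + neighbors
        new_features[node] = features[group].mean(axis=0)
    return new_features

# The same aggregation code runs unchanged on a sequence ...
seq_feats = np.random.rand(6, 4)                          # 6 tokens, 4 features each
print(aggregate(seq_feats, chain_adjacency(6)).shape)     # (6, 4)

# ... and on an image grid, because both are just graphs.
img_feats = np.random.rand(3 * 3, 4)                      # 3x3 pixels, 4 channels each
print(aggregate(img_feats, grid_adjacency(3, 3)).shape)   # (9, 4)
```

A real GNN layer replaces the plain mean with a learned, weighted aggregation, but the point of the sketch is that nothing in `aggregate` depends on the neighborhood coming from a sequence or a grid; any adjacency structure will do.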