Before we learn about geometric deep learning techniques, it is important to understand the differences between Euclidean and non-Euclidean data, and why non-Euclidean data requires a separate approach.
Over the last eight years, deep learning architectures such as FNNs, CNNs, and RNNs have proven successful at a variety of tasks, including speech recognition, machine translation, image reconstruction, object recognition and segmentation, and motion tracking. This success comes from their ability to exploit the local statistical properties that exist within data, namely stationarity, locality, and compositionality. In the case of CNNs, the input data can be represented in a grid form (such as images, which can be represented by matrices and tensors).
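As a quick illustration of this grid structure, consider the following minimal sketch (using PyTorch, which is an assumption on our part and not part of the original discussion). An image is stored as a tensor on a regular grid, and a convolutional layer slides a small filter over it: the filter's limited receptive field reflects locality, and reusing the same weights at every position reflects stationarity.

import torch
import torch.nn as nn

# A single 3-channel 32x32 image: a tensor defined on a regular (Euclidean) grid
image = torch.rand(1, 3, 32, 32)

# A small 3x3 filter bank; the same weights are shared across all grid positions
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)

# Each output value depends only on a local 3x3 neighborhood of the input
features = conv(image)
print(features.shape)  # torch.Size([1, 8, 32, 32])

This weight sharing is only possible because every pixel has the same, fixed neighborhood structure; on non-Euclidean data such as graphs or manifolds, no such regular grid exists, which is precisely why a separate approach is needed.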
The stationarity, in this case (images), comes from the...