In this chapter, we broadened the concepts underlying standard neural networks by adding features that address more complex problems. To begin with, we covered the architecture of CNNs: these are ANNs in which the hidden layers typically consist of convolutional layers, pooling layers, FC layers, and normalization layers.
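To make that layer stack concrete, here is a minimal sketch in tf.keras; the input shape (28x28 grayscale images) and the layer sizes are illustrative assumptions, not a specific model from the chapter:

```python
# A minimal CNN sketch; input shape and layer sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),                      # 28x28 grayscale input
    layers.Conv2D(32, kernel_size=3, activation='relu'),  # convolutional layer
    layers.BatchNormalization(),                          # normalization layer
    layers.MaxPooling2D(pool_size=2),                     # pooling layer
    layers.Conv2D(64, kernel_size=3, activation='relu'),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),                 # FC layer
    layers.Dense(10, activation='softmax')                # output layer
])
model.summary()
```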
We then learned how to train, test, and evaluate a CNN through the analysis of a real case. For this purpose, an HWR problem was addressed on Google Cloud Platform.
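Continuing the sketch above, the train/test/evaluate cycle might look as follows; MNIST is used here only as a stand-in HWR dataset, and the epoch and batch settings are illustrative assumptions:

```python
# Training/evaluation sketch for the model above; MNIST stands in as
# an HWR dataset, and the hyperparameters are illustrative.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0  # scale pixels to [0, 1], add channel dim
x_test = x_test[..., None] / 255.0

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=3, batch_size=128,
          validation_split=0.1)                        # training
test_loss, test_acc = model.evaluate(x_test, y_test)  # testing/evaluation
print(f'Test accuracy: {test_acc:.3f}')
```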
Then, we explored RNNs. Recurrent networks take as their input not only the current data fed to the network but also what they have experienced over time, carried forward in a hidden state. We analyzed several RNN architectures and, in particular, focused on LSTM networks.
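As an illustration of that idea, a minimal LSTM model in tf.keras might look like the following; the sequence length, feature count, and unit sizes are all illustrative assumptions:

```python
# A minimal LSTM sketch; shapes and sizes are illustrative. The LSTM
# layer carries a hidden (and cell) state across time steps, so its
# output depends on the whole sequence seen so far, not just the
# current input.
import tensorflow as tf
from tensorflow.keras import layers, models

rnn = models.Sequential([
    layers.Input(shape=(50, 8)),  # 50 time steps, 8 features per step
    layers.LSTM(64),              # final hidden state summarizes the sequence
    layers.Dense(1)               # e.g. a regression head
])
rnn.summary()
```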