In this chapter, we have briefly presented the most important deep learning layers, and we have discussed two concrete examples based on Keras. We have seen how to model a deep convolutional network to classify images and how an LSTM model can be employed when it's necessary to learn short- and long-term dependencies in a time series.
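As a brief reminder of how compact such models are in Keras, the following is a minimal sketch of an LSTM-based sequence classifier. The input shape, layer sizes, and synthetic data are illustrative assumptions, not the chapter's actual example:

```python
import numpy as np
import tensorflow as tf

# Illustrative synthetic data: 200 sequences, 30 time steps, 8 features each
X = np.random.normal(size=(200, 30, 8)).astype(np.float32)
y = np.random.randint(0, 2, size=(200, 1)).astype(np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(30, 8)),
    tf.keras.layers.LSTM(32),                        # learns short- and long-term dependencies
    tf.keras.layers.Dense(1, activation='sigmoid')   # binary classification head
])

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```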
We have also seen how TensorFlow computes the gradients of an output tensor with respect to any previously connected layer, and therefore how the standard backpropagation strategy can be applied seamlessly to deep architectures. We haven't discussed deep learning problems and methods in detail because they require much more space; however, the reader can easily find many valid resources to continue exploring this fascinating field. At the same time, it's possible to modify the examples...
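As a minimal sketch of the gradient computation mentioned above (the shapes, layer sizes, and learning rate are illustrative assumptions, not taken from the chapter's examples), the following snippet uses tf.GradientTape to obtain the gradients of a loss with respect to the variables of two connected layers and then applies a single optimizer step:

```python
import tensorflow as tf

# Toy data and a two-layer model; all names and shapes are illustrative only
x = tf.random.normal((16, 10))
y_true = tf.random.normal((16, 1))

hidden = tf.keras.layers.Dense(8, activation='relu')
out = tf.keras.layers.Dense(1)

with tf.GradientTape() as tape:
    h = hidden(x)                                    # forward pass through the first layer
    y_pred = out(h)                                  # forward pass through the second layer
    loss = tf.reduce_mean(tf.square(y_true - y_pred))

# Gradients of the loss with respect to the trainable variables of both
# connected layers, computed by automatic differentiation
variables = hidden.trainable_variables + out.trainable_variables
grads = tape.gradient(loss, variables)

# One standard gradient-descent update, i.e. a single backpropagation step
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
optimizer.apply_gradients(zip(grads, variables))
```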