In Chapter 3, Deep Learning with ConvNets, we learned about convolutional neural networks (CNNs) and saw how they exploit the spatial geometry of their input. For example, CNNs apply convolution and pooling operations in one dimension for audio and text data along the time dimension, in two dimensions for images along the (height x width) dimensions, and in three dimensions for videos along the (height x width x time) dimensions. A short sketch of the corresponding Keras layers follows.
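The following minimal sketch (ours, not taken from Chapter 3; the input shapes and filter counts are arbitrary illustrative values) shows how these three cases map onto the Conv1D, Conv2D, and Conv3D layers in Keras:

```python
from keras.layers import Conv1D, Conv2D, Conv3D, Input

# 1D: e.g. audio or text, convolving along the time dimension
x1 = Input(shape=(100, 16))            # (time steps, channels)
y1 = Conv1D(filters=32, kernel_size=3)(x1)

# 2D: e.g. images, convolving along (height x width)
x2 = Input(shape=(64, 64, 3))          # (height, width, channels)
y2 = Conv2D(filters=32, kernel_size=(3, 3))(x2)

# 3D: e.g. video, convolving along (height x width x time)
x3 = Input(shape=(16, 64, 64, 3))      # (frames, height, width, channels)
y3 = Conv3D(filters=32, kernel_size=(3, 3, 3))(x3)
```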
In this chapter, we will learn about recurrent neural networks (RNNs), a class of neural networks that exploit the sequential nature of their input. Such inputs could be text, speech, time series, and anything else where the occurrence of an element in the sequence depends on the elements that appeared before it. For example, the next word in the sentence the dog... is more likely to be barks than car; therefore, given such a sequence, an RNN is more likely to predict barks than car.
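To make the idea concrete, here is a minimal sketch (illustrative only, not a model from this chapter; the vocabulary size, sequence length, and layer sizes are assumed values) of a Keras RNN that reads a sequence of word indices and outputs a probability distribution over the next word:

```python
from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN, Dense

VOCAB_SIZE = 5000   # assumed vocabulary size
SEQ_LEN = 10        # assumed length of the input word sequence

model = Sequential()
model.add(Embedding(VOCAB_SIZE, 64, input_length=SEQ_LEN))  # word indices -> dense vectors
model.add(SimpleRNN(128))                                   # processes the sequence step by step
model.add(Dense(VOCAB_SIZE, activation="softmax"))          # probabilities for the next word
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.summary()
```

Trained on enough text, such a model would assign a higher probability to barks than to car after seeing the dog, because the recurrent layer carries information from earlier elements of the sequence forward to later steps.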