One of the main drawbacks of many neural network architectures, including ConvNets (CNNs), is that they cannot process sequential data. In other words, a complete feature, for example, an image, has to be presented all at once, so both the input and the output must be fixed-length tensors. Moreover, the outputs produced for previous features have no effect on the current feature: all input values (and output values) are treated as independent of one another. For example, in our fashion_mnist model (Chapter 4, Supervised Machine Learning Using TensorFlow 2), each input fashion image is independent of, and totally ignorant of, previous images.
Recurrent Neural Networks (RNNs) overcome these limitations and make a wide range of new applications possible.
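To make the contrast concrete, here is a minimal sketch, assuming a tf.keras setup consistent with the book's TensorFlow 2 examples (layer sizes and shapes here are illustrative, not taken from the book's model): a dense model demands a fixed-length input, while an RNN layer can consume sequences whose length is not fixed in advance, carrying state from one timestep to the next.

```python
import tensorflow as tf

# A dense layer requires a fixed-length input: every example must be
# exactly 784 values (e.g. a flattened 28 x 28 fashion image).
fixed_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(10),
])

# An RNN layer accepts a sequence of timesteps; leaving the timestep
# dimension as None lets it process sequences of different lengths.
rnn_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 28)),  # (timesteps, features)
    tf.keras.layers.SimpleRNN(32),            # hidden state carries across timesteps
    tf.keras.layers.Dense(10),
])

# Batches with different sequence lengths both work:
print(rnn_model(tf.random.normal([4, 28, 28])).shape)  # 28 timesteps -> (4, 10)
print(rnn_model(tf.random.normal([4, 50, 28])).shape)  # 50 timesteps -> (4, 10)
```

The key difference is the recurrent layer's internal state, which is updated at each timestep, so earlier inputs in a sequence can influence the output for later ones.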
In this chapter, we will...