Recall that in deep feedforward networks, autoencoder neural networks, and CNNs, which we discussed in the previous chapters, data flows in one direction, from the input layer to the output layer. However, deep learning models are not limited to feedforward architectures: data can proceed in any direction, even circling back to the input layer. In such models, the output from a previous step loops back and becomes part of the input for the next step. RNNs are a prime example of this. The general form of RNNs is depicted in the following diagram, and we will be working on several variants of RNNs throughout this chapter:
As we can see in the preceding diagram, data from previous time points feeds into the training of the current time point. This recurrent architecture makes the models work well with time series (such as product sales or stock prices) or sequential inputs (words in articles...
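To make the recurrence concrete, here is a minimal NumPy sketch of a vanilla RNN cell unrolled over a sequence. The function and weight names (rnn_forward, W_xh, W_hh, b_h) are our own illustrative choices, not part of any library; the point is simply that the hidden state from the previous time step is combined with the current input at every step:

```python
import numpy as np

def rnn_forward(x_seq, W_xh, W_hh, b_h, h0):
    """Run a vanilla RNN cell over a sequence:
    h_t = tanh(W_xh @ x_t + W_hh @ h_{t-1} + b_h)
    """
    h = h0
    hidden_states = []
    for x_t in x_seq:
        # The previous hidden state h loops back into the current step,
        # so each output depends on the entire input history so far.
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
        hidden_states.append(h)
    return hidden_states

# Toy dimensions: 3-dimensional inputs, 4-dimensional hidden state, 5 steps.
rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 3, 4, 5
W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)
x_seq = [rng.normal(size=input_dim) for _ in range(seq_len)]

states = rnn_forward(x_seq, W_xh, W_hh, b_h, h0=np.zeros(hidden_dim))
print(len(states), states[-1].shape)  # 5 (4,)
```

Note that the same weights W_xh and W_hh are reused at every time step; this weight sharing across time is what lets an RNN handle sequences of arbitrary length.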