Summary
Recurrent neural networks (RNNs) are a class of neural networks that explicitly build the inductive biases of sequential data into their structure.
Several variations of RNNs exist, but all of them share the same high-level structure. They differ mainly in how they decide which incoming information to learn and store in memory, and which previously stored information to forget.
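As a quick illustration of this remember/forget idea, here is a minimal NumPy sketch of one such gated cell, a GRU-style update. The function and weight names (gru_cell, W_z, U_z, and so on) are hypothetical, chosen for this sketch rather than taken from earlier in the chapter:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, W_z, U_z, W_r, U_r, W_h, U_h):
    """One GRU step: the gates decide what to keep and what to overwrite.

    x_t: input at time t, shape (input_dim,)
    h_prev: previous hidden state (the "memory"), shape (hidden_dim,)
    W_*, U_*: weight matrices (hypothetical names, for illustration only)
    """
    # Update gate: how much of the old memory to keep versus replace.
    z = sigmoid(W_z @ x_t + U_z @ h_prev)
    # Reset gate: how much of the old memory to use in the candidate state.
    r = sigmoid(W_r @ x_t + U_r @ h_prev)
    # Candidate state: new information proposed from the current input.
    h_tilde = np.tanh(W_h @ x_t + U_h @ (r * h_prev))
    # Blend: forget part of the old state, remember part of the new one.
    return (1.0 - z) * h_prev + z * h_tilde

# Tiny usage example with random weights on a 5-step sequence.
rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 3
W = {k: rng.normal(size=(hidden_dim, input_dim)) for k in "zrh"}
U = {k: rng.normal(size=(hidden_dim, hidden_dim)) for k in "zrh"}
h = np.zeros(hidden_dim)
for x in rng.normal(size=(5, input_dim)):
    h = gru_cell(x, h, W["z"], U["z"], W["r"], U["r"], W["h"], U["h"])
print(h)
```

The update gate z plays the remember/forget role described above: it controls, element by element, how much of the old hidden state survives each step and how much is replaced by new information.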
However, do note that a more recent architecture, the transformer, which will be introduced in Chapter 6, Understanding Neural Network Transformers, demonstrated that recurrence is not required to achieve good performance on sequential data.
With that, we are done with RNNs; in the next chapter, we will take a brief dive into the world of autoencoders.