RNNs assume that data is sequential, so that earlier data points affect the current observation and are relevant for predicting subsequent elements of the sequence.
They allow for more complex and diverse input-output relationships than FFNNs and CNNs, which are designed to map one input vector to one output vector, usually of fixed size, using a given number of computational steps. RNNs, in contrast, can model data for tasks where the input, the output, or both are best represented as a sequence of vectors.
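To make this concrete, the following minimal sketch (using NumPy, with illustrative dimensions and randomly initialized weights rather than trained parameters) shows how a vanilla RNN cell reuses the same weights at every step, so the same model maps input sequences of any length to output sequences of the same length:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 3-dim inputs, 5-dim hidden state, 2-dim outputs
n_in, n_hidden, n_out = 3, 5, 2

# Parameters of a vanilla RNN cell (random here; learned in practice)
W_xh = rng.standard_normal((n_in, n_hidden)) * 0.1   # input-to-hidden
W_hh = rng.standard_normal((n_hidden, n_hidden)) * 0.1  # hidden-to-hidden
W_hy = rng.standard_normal((n_hidden, n_out)) * 0.1  # hidden-to-output

def rnn_forward(x_seq):
    """Map an input sequence of shape (T, n_in) to outputs of shape (T, n_out)."""
    h = np.zeros(n_hidden)  # hidden state carries information from earlier steps
    outputs = []
    for x_t in x_seq:  # one computational step per sequence element
        h = np.tanh(x_t @ W_xh + h @ W_hh)  # same weights applied at every step
        outputs.append(h @ W_hy)            # per-step output vector
    return np.array(outputs)

# The same cell handles sequences of different lengths
short = rnn_forward(rng.standard_normal((4, n_in)))
long = rnn_forward(rng.standard_normal((10, n_in)))
print(short.shape, long.shape)  # (4, 2) (10, 2)
```

Unlike an FFNN, whose fixed input and output dimensions are baked into its weight matrices, the number of steps here is set by the data, which is what enables one-to-many, many-to-one, and many-to-many mappings.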
The following diagram, inspired by Andrej Karpathy's 2015 blog post, The Unreasonable Effectiveness of Recurrent Neural Networks (you can refer to GitHub for more information, here: https://github.com/PacktPublishing/Hands-On-Machine-Learning-for-Algorithmic-Trading), illustrates mappings from input to output vectors using non-linear transformations carried out by one or...