Among the deep learning architectures in widespread use today are Recurrent Neural Networks (RNNs). The basic idea of RNNs is to exploit the sequential structure of the input.
These networks are called recurrent because they perform the same computation for every element of an input sequence, and the output for each element depends not only on the current input but also on all previous computations.
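A minimal sketch of this recurrence, assuming a simple Elman-style RNN with a tanh activation; all names, dimensions, and random parameters below are illustrative, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

input_size, hidden_size = 4, 8  # illustrative dimensions

# Parameters shared across every time step (the "same computation").
W_x = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One recurrence step: the new state depends on the current
    input x_t and, through h_prev, on all previous computations."""
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

# Apply the same step to each element of a sequence of 5 inputs.
sequence = rng.normal(size=(5, input_size))
h = np.zeros(hidden_size)
states = []
for x_t in sequence:
    h = rnn_step(x_t, h)
    states.append(h)

print(len(states), states[-1].shape)
```

Because the final state is a function of every earlier state, information from the whole prefix of the sequence can influence the output at each step.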
RNNs have proved to perform very well on problems such as predicting the next character in a text or, similarly, the next word in a sentence.
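For next-character prediction, the hidden state is typically mapped through an output layer and a softmax to obtain a probability distribution over the vocabulary. A sketch with a toy four-character vocabulary and untrained, randomly initialised weights (all names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

vocab = list("abcd")          # toy character vocabulary (illustrative)
V, H = len(vocab), 8          # vocabulary size, hidden size

# Untrained, randomly initialised parameters (a sketch, not a trained model).
W_x = rng.normal(scale=0.1, size=(H, V))
W_h = rng.normal(scale=0.1, size=(H, H))
W_y = rng.normal(scale=0.1, size=(V, H))

def one_hot(ch):
    v = np.zeros(V)
    v[vocab.index(ch)] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Read the context "abc" one character at a time, then score the next one.
h = np.zeros(H)
for ch in "abc":
    h = np.tanh(W_x @ one_hot(ch) + W_h @ h)

probs = softmax(W_y @ h)      # distribution over the next character
print(dict(zip(vocab, probs.round(3))))
```

In a real system these weights would be learned by backpropagation through time so that the distribution assigns high probability to the character that actually follows.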
However, they are also applied to more complex problems, such as Machine Translation (MT). In this case, the network takes as input a sequence of words in a source language, and the output is the corresponding sequence translated into a target language. Finally, other applications of great importance in which...
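The MT setting is commonly handled with an encoder-decoder pair of RNNs: one RNN folds the source sentence into a state vector, and a second RNN emits target words from that state. A toy sketch with hypothetical vocabularies and untrained weights (the output is therefore arbitrary, the point is only the data flow):

```python
import numpy as np

rng = np.random.default_rng(2)

src_vocab = ["je", "bois", "du", "café", "<eos>"]  # toy source words (illustrative)
tgt_vocab = ["i", "drink", "coffee", "<eos>"]      # toy target words (illustrative)
H = 8

def params(out_dim, in_dim):
    return rng.normal(scale=0.1, size=(out_dim, in_dim))

# Untrained encoder/decoder parameters (a sketch, not a trained translator).
We_x, We_h = params(H, len(src_vocab)), params(H, H)
Wd_x, Wd_h = params(H, len(tgt_vocab)), params(H, H)
Wd_y = params(len(tgt_vocab), H)

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

# Encoder: fold the source sentence into a single state vector h.
h = np.zeros(H)
for w in ["je", "bois", "du", "café", "<eos>"]:
    h = np.tanh(We_x @ one_hot(src_vocab.index(w), len(src_vocab)) + We_h @ h)

# Decoder: greedily emit target words until <eos> (capped at 10 steps).
out, prev = [], "<eos>"  # conventionally seeded with a special start token
for _ in range(10):
    h = np.tanh(Wd_x @ one_hot(tgt_vocab.index(prev), len(tgt_vocab)) + Wd_h @ h)
    prev = tgt_vocab[int(np.argmax(Wd_y @ h))]
    if prev == "<eos>":
        break
    out.append(prev)

print(out)  # untrained, so the "translation" is arbitrary
```

Training would adjust all six weight matrices jointly so that the decoder's greedy (or beam-searched) output matches reference translations.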