So far, we've explored solutions for tasks that are not sequence-based: they don't require any history, and it makes no difference what image came before the one being classified at the moment. In many other tasks, however, the context surrounding a piece of information is very important. For example, when we speak, a letter might be pronounced differently depending on which letters come before or after it.
Our brain processes this context seamlessly, and you could argue that by providing more context to the Neural Networks (NNs) we have seen so far, they too would be able to process text.
There is a particular architecture of NNs that aims to solve this problem: Recurrent Neural Networks (RNNs).
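To make the idea concrete before diving into the details, here is a minimal sketch of the core mechanism behind RNNs: a hidden state that is updated at every step, so the output for each input depends on what came before it. The weights `w_x` and `w_h` are arbitrary illustrative values, not trained parameters.

```python
import math

def rnn_step(x, h, w_x=0.5, w_h=0.8):
    # One recurrence step: the new hidden state mixes the
    # current input x with the previous hidden state h.
    return math.tanh(w_x * x + w_h * h)

def run(sequence):
    h = 0.0  # start with no history
    for x in sequence:
        h = rnn_step(x, h)
    return h

# The same inputs in a different order produce a different final
# state, because the hidden state carries information about order.
print(run([1.0, 0.0, 1.0]))
print(run([1.0, 1.0, 0.0]))
```

Notice that a plain feed-forward network, which sees each input in isolation, could not distinguish these two sequences; the recurrence is what makes order matter.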
The important addition that we will discuss in this chapter is a way to extend the...