Introducing LSTMs
While RNNs allow us to use sequences of words as input to our models, they are far from perfect. RNNs suffer from two main flaws, both of which can be partially remedied by using a more sophisticated version of the RNN, known as the long short-term memory (LSTM) network.
The basic structure of an RNN makes it very difficult to retain information over long stretches of a sequence. Consider a sentence that's 20 words long: from the first word, which influences the initial hidden state, to the last word, the hidden state is updated 20 times. By the time we reach the final hidden state, very little information about the words at the beginning of the sentence survives these repeated updates, which means RNNs aren't very good at capturing long-term dependencies within sequences. This also ties in with the vanishing gradient problem mentioned earlier: when we backpropagate through many timesteps, the gradients flowing back to the earliest inputs shrink toward zero, making it very inefficient to learn from long sequences.
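We can see this effect directly in a small experiment. The following sketch (using random tensors as stand-ins for word embeddings, with illustrative sizes chosen here rather than taken from any particular model) runs a plain RNN over a 20-step sequence, backpropagates from the final hidden state, and prints the gradient magnitude reaching each timestep's input. Typically, the earliest timesteps receive far smaller gradients than the latest ones:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative sizes: a 20-word sequence, one example per batch
seq_len, batch, embed_dim, hidden_dim = 20, 1, 32, 32

# A plain (vanilla) RNN layer
rnn = nn.RNN(input_size=embed_dim, hidden_size=hidden_dim)

# Random stand-ins for 20 word embeddings; requires_grad lets us
# inspect how strongly each timestep influences the final hidden state
inputs = torch.randn(seq_len, batch, embed_dim, requires_grad=True)

outputs, final_hidden = rnn(inputs)

# Backpropagate from the final hidden state only
final_hidden.sum().backward()

# Gradient magnitude at each timestep's input: early steps usually
# receive much smaller gradients than late ones, illustrating why
# information from the start of the sentence is hard to learn from
grad_norms = inputs.grad.reshape(seq_len, -1).norm(dim=1)
for t, g in enumerate(grad_norms.tolist()):
    print(f"step {t:2d}: grad norm = {g:.6f}")
```

The exact numbers depend on the random initialization, but the overall pattern, with gradients decaying as we move back toward the start of the sequence, is what the LSTM architecture is designed to mitigate.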
Consider a long paragraph where...