Introduction
In previous chapters, we studied text processing techniques such as word embeddings, tokenization, and Term Frequency-Inverse Document Frequency (TF-IDF). We also learned about a specific network architecture, the Recurrent Neural Network (RNN), which suffers from the vanishing gradient problem.
In this chapter, we are going to study the Gated Recurrent Unit (GRU), a mechanism that deals with vanishing gradients by methodically adding memory to the network. Essentially, the gates used in a GRU are vectors that decide what information should be passed on to the next stage of the network. This, in turn, helps the network retain relevant context and generate its output accordingly.
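To make the role of these gates concrete, the following is a minimal NumPy sketch of a single GRU step. The weight names, dimensions, and the exact blending convention of the update gate are illustrative assumptions for this sketch, not the precise formulation used later in the chapter:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, params):
    """One GRU step: the gates are vectors with values in [0, 1] that weigh
    how much of the previous state is kept versus overwritten."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x_t + Uz @ h_prev + bz)                # update gate
    r = sigmoid(Wr @ x_t + Ur @ h_prev + br)                # reset gate
    h_cand = np.tanh(Wh @ x_t + Uh @ (r * h_prev) + bh)     # candidate state
    return (1 - z) * h_prev + z * h_cand                    # blend old and new

# Toy sizes for illustration only: 4-dimensional inputs, 3-dimensional state.
rng = np.random.default_rng(0)
params = [rng.standard_normal(s) for s in
          [(3, 4), (3, 3), (3,), (3, 4), (3, 3), (3,), (3, 4), (3, 3), (3,)]]
h = np.zeros(3)
for x in rng.standard_normal((5, 4)):   # a toy sequence of 5 inputs
    h = gru_step(x, h, params)
print(h)
```

Notice that when the update gate `z` is close to zero, the previous state passes through almost unchanged, which is exactly how the gate decides what information flows to the next step.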
A basic RNN generally consists of an input layer, an output layer, and several interconnected hidden layers. The following diagram displays the basic architecture of an RNN:
Figure 6.1: A basic RNN
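For orientation, here is a minimal sketch of such an architecture in Keras, assuming a toy binary text classification setup; the vocabulary size and layer widths are arbitrary choices for illustration:

```python
from tensorflow.keras import layers, models

# A minimal sequence classifier: an embedding (input) layer, one recurrent
# hidden layer, and a dense output layer.
model = models.Sequential([
    layers.Input(shape=(None,)),                        # variable-length token IDs
    layers.Embedding(input_dim=10000, output_dim=32),   # input/embedding layer
    layers.SimpleRNN(64),                                # recurrent hidden layer
    layers.Dense(1, activation="sigmoid"),               # output layer
])
model.summary()
```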
RNNs, in their simplest form, suffer from a drawback: they are unable to retain long-term relationships in a sequence. To rectify this, gated architectures such as the GRU introduce memory mechanisms that preserve information across longer sequences.
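In practice, this amounts to a small change to the sketch above: the simple recurrent layer is swapped for a gated one. Again, the sizes are illustrative assumptions:

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(None,)),
    layers.Embedding(input_dim=10000, output_dim=32),
    layers.GRU(64),                          # gated layer in place of SimpleRNN
    layers.Dense(1, activation="sigmoid"),
])
```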