Building a bidirectional LSTM
So far, we have trained and tested a simple RNN model on sentiment analysis – a binary classification task over textual data. In this section, we will try to improve performance on the same task by using a more advanced recurrent architecture – the LSTM.
LSTMs, as we know, are better at handling longer sequences thanks to their gated memory cell, which helps retain important information from many time steps earlier and discard irrelevant information even if it is recent. With the exploding and vanishing gradients problem in check, LSTMs should be able to perform well when processing long movie reviews.
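To make the gating idea concrete, here is a minimal sketch that steps a single PyTorch LSTM cell over a toy sequence; the sizes are illustrative and not taken from our exercise. The cell state c is the "memory" that the gates read from and write to at every step:

```python
import torch
import torch.nn as nn

# Illustrative sizes only – not the hyperparameters used in the exercise
input_size, hidden_size, seq_len = 8, 16, 5
cell = nn.LSTMCell(input_size, hidden_size)

x = torch.randn(seq_len, 1, input_size)  # (time, batch, features)
h = torch.zeros(1, hidden_size)          # hidden state
c = torch.zeros(1, hidden_size)          # cell state – the long-term memory

for t in range(seq_len):
    # Inside the cell, the input gate decides what to write to c,
    # the forget gate decides what to erase from c, and the output
    # gate decides what part of c to expose as h. This lets useful
    # information survive across many time steps.
    h, c = cell(x[t], (h, c))

print(h.shape, c.shape)  # torch.Size([1, 16]) torch.Size([1, 16])
```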
Moreover, we will use a bidirectional model, as processing the sequence in both directions broadens the context available at every time step, letting the model make a more informed decision about the sentiment of the movie review. The RNN model we looked at in the previous exercise overfitted the dataset during training, so to tackle that, we will use dropout this time.
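Before walking through the exercise, here is a minimal sketch of how these pieces fit together in PyTorch. The class name and hyperparameters are illustrative assumptions, not the exact values used in the exercise; the point is that bidirectional=True runs a second LSTM over the reversed sequence, dropout regularizes both the stacked LSTM layers and the classifier head, and the final forward and backward hidden states are concatenated before the linear layer:

```python
import torch
import torch.nn as nn

class BiLSTMSentiment(nn.Module):  # hypothetical name, for illustration
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128,
                 num_layers=2, dropout=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # bidirectional=True adds a backward pass over the sequence;
        # dropout here is applied between the stacked LSTM layers
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers,
                            bidirectional=True, dropout=dropout,
                            batch_first=True)
        self.dropout = nn.Dropout(dropout)
        # forward + backward final states are concatenated, hence * 2
        self.fc = nn.Linear(hidden_dim * 2, 1)

    def forward(self, text):                   # text: (batch, seq_len)
        embedded = self.embedding(text)
        _, (hidden, _) = self.lstm(embedded)
        # hidden: (num_layers * 2, batch, hidden_dim);
        # take the last layer's forward (-2) and backward (-1) states
        hidden = torch.cat((hidden[-2], hidden[-1]), dim=1)
        return self.fc(self.dropout(hidden))   # one raw logit per review

# Usage sketch with made-up vocabulary size and batch of token IDs
model = BiLSTMSentiment(vocab_size=25000)
logits = model(torch.randint(0, 25000, (4, 50)))  # 4 reviews, 50 tokens each
print(logits.shape)  # torch.Size([4, 1])
```

The model emits a raw logit per review, so it would typically be trained with a sigmoid-based loss such as nn.BCEWithLogitsLoss for this binary classification task.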