Discussing GRUs and attention-based models
In the final section of this chapter, we will briefly look at GRUs, how they are similar to yet different from LSTMs, and how to initialize a GRU model using PyTorch. We will also look at attention-based RNNs. We will conclude this section by describing how attention-only models (with no recurrence or convolutions) outperform the recurrent family of neural models on sequence modeling tasks.
GRUs and PyTorch
As we discussed in the Exploring the evolution of recurrent networks section, GRUs are a type of memory cell with two gates – a reset gate and an update gate – as well as one hidden state vector. In terms of configuration, GRUs are simpler than LSTMs yet equally effective at dealing with the exploding and vanishing gradient problems. A substantial body of research has compared the performance of LSTMs and GRUs. While both perform better than simple RNNs on various sequence-related tasks, one is slightly better...
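As a minimal sketch of initializing a GRU in PyTorch, the following instantiates `torch.nn.GRU` and runs a forward pass; the dimensions (`input_size`, `hidden_size`, and so on) are arbitrary values chosen for illustration:

```python
import torch
import torch.nn as nn

# Hypothetical dimensions chosen for illustration
input_size, hidden_size, num_layers = 32, 64, 2

# A GRU layer: each cell uses a reset gate and an update gate and
# carries a single hidden state vector (no separate cell state as in LSTMs)
gru = nn.GRU(input_size=input_size, hidden_size=hidden_size,
             num_layers=num_layers, batch_first=True)

batch, seq_len = 4, 10
x = torch.randn(batch, seq_len, input_size)

output, h_n = gru(x)
print(output.shape)  # hidden states for every time step: (batch, seq_len, hidden_size)
print(h_n.shape)     # final hidden state per layer: (num_layers, batch, hidden_size)
```

Note that, unlike `nn.LSTM`, the forward pass returns only a hidden state `h_n` rather than a `(h_n, c_n)` tuple, reflecting the GRU's simpler configuration.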