In this chapter, we're going to talk about gated recurrent units (GRUs). We will also compare them to LSTMs, which we learned about in the previous chapter. As you know, LSTMs have been around since 1997 and are among the most widely used models in Deep Learning for NLP today. GRUs, first presented in 2014, are a simpler variant of LSTMs: they share many of the same properties, but are easier and faster to train and typically have lower computational complexity.
In this chapter, we will learn about the following:
- GRUs
- How GRUs differ from LSTMs
- How to implement a GRU
- GRU, LSTM, RNN, and feedforward network comparisons
- Network differences
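To make the complexity claim above concrete before we dive in, here is a minimal sketch, assuming PyTorch is available, that compares the parameter counts of a GRU layer and an LSTM layer of identical size. The layer sizes (128 and 256) are arbitrary illustrative choices. Because a GRU uses three gate weight matrices where an LSTM uses four, the GRU ends up with roughly three quarters of the LSTM's parameters:

```python
import torch.nn as nn

# Arbitrary sizes chosen for illustration.
input_size, hidden_size = 128, 256

gru = nn.GRU(input_size, hidden_size)    # 3 gates: reset, update, candidate
lstm = nn.LSTM(input_size, hidden_size)  # 4 gates: input, forget, cell, output

def count_params(module: nn.Module) -> int:
    """Total number of trainable parameters in a module."""
    return sum(p.numel() for p in module.parameters())

# The GRU carries roughly 3/4 of the LSTM's parameters,
# one reason it tends to train faster.
print(f"GRU parameters:  {count_params(gru):,}")
print(f"LSTM parameters: {count_params(lstm):,}")
```

Running this prints the two totals, and the ratio between them is exactly 3:4, mirroring the ratio of gate matrices in the two architectures.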