A Gated Recurrent Unit (GRU) is a type of recurrent block that was introduced in 2014 (Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation, https://arxiv.org/abs/1406.1078, and Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling, https://arxiv.org/abs/1412.3555) as an improvement over the LSTM. A GRU unit usually matches or exceeds the performance of an LSTM, but it does so with fewer parameters and operations:
Figure: A GRU cell
Similar to the classic RNN, a GRU cell has a single hidden state, h_t. You can think of it as a combination of the hidden and cell states of an LSTM. The GRU cell has two gates (see the code sketch after this list):
- An update gate, z_t, which combines the input and forget LSTM gates. It decides what information to discard and what new information to include in its place, based on the network input, x_t...
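
To make the gating mechanism concrete, here is a minimal single-step GRU forward pass in plain NumPy. This is only a sketch, not a reference implementation: the weight names (W_z, U_z, W_r, U_r, W_h, U_h) are assumptions made for illustration, biases are omitted, and the sign convention for the update-gate interpolation varies between sources.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def gru_cell(x_t, h_prev, W_z, U_z, W_r, U_r, W_h, U_h):
    """One GRU step: returns the new hidden state h_t (biases omitted)."""
    # Update gate z_t: how much of the old state to replace with new information
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev)
    # Reset gate r_t: how much of the old state to expose to the candidate
    r_t = sigmoid(W_r @ x_t + U_r @ h_prev)
    # Candidate hidden state, computed from the input and the reset old state
    h_cand = np.tanh(W_h @ x_t + U_h @ (r_t * h_prev))
    # Interpolate between the old state and the candidate
    h_t = (1 - z_t) * h_prev + z_t * h_cand
    return h_t

# Example usage with random weights (sizes chosen arbitrarily)
input_size, hidden_size = 4, 3
rng = np.random.default_rng(0)
W_z, W_r, W_h = (rng.normal(size=(hidden_size, input_size)) for _ in range(3))
U_z, U_r, U_h = (rng.normal(size=(hidden_size, hidden_size)) for _ in range(3))
x_t = rng.normal(size=input_size)
h_prev = np.zeros(hidden_size)
print(gru_cell(x_t, h_prev, W_z, U_z, W_r, U_r, W_h, U_h).shape)  # (3,)
```

Because the cell maintains a single hidden state and uses only two gates plus a candidate state, it needs three weight pairs instead of the LSTM's four, which is where the savings in parameters and operations mentioned above come from.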