Understanding the attention mechanism
In this section, we’ll discuss several iterations of the attention mechanism in the order that they were introduced.
Bahdanau attention
The first iteration of attention (introduced in Neural Machine Translation by Jointly Learning to Align and Translate, https://arxiv.org/abs/1409.0473), known as Bahdanau attention, extends the seq2seq model by allowing the decoder to work with all of the encoder's hidden states, not just the last one. It is an addition to the existing seq2seq model rather than an independent entity. The following diagram shows how Bahdanau attention works:
Figure 7.2 – The attention mechanism
Don’t worry—it looks scarier than it is. We’ll go through this diagram from top to bottom: the attention mechanism works by plugging an additional context vector, c_t, between the encoder and the decoder. The hidden decoder state at time t is now a function not only of the hidden state and decoder output at...
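To make the idea of the context vector concrete, here is a minimal sketch of Bahdanau (additive) attention in PyTorch. The class and parameter names (BahdanauAttention, enc_hidden_size, dec_hidden_size, attn_size) are illustrative assumptions, not code from the paper or from this book's examples; the sketch only shows how alignment scores over all encoder hidden states are turned into a single context vector for one decoding step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BahdanauAttention(nn.Module):
    """Additive attention: scores each encoder state against the decoder state."""
    def __init__(self, enc_hidden_size, dec_hidden_size, attn_size):
        super().__init__()
        self.W_enc = nn.Linear(enc_hidden_size, attn_size, bias=False)
        self.W_dec = nn.Linear(dec_hidden_size, attn_size, bias=False)
        self.v = nn.Linear(attn_size, 1, bias=False)

    def forward(self, dec_state, enc_states):
        # dec_state:  [batch, dec_hidden_size]           -- previous decoder hidden state
        # enc_states: [batch, seq_len, enc_hidden_size]  -- all encoder hidden states
        # Alignment scores e_{t,i} = v^T * tanh(W_dec * s_{t-1} + W_enc * h_i)
        scores = self.v(torch.tanh(
            self.W_dec(dec_state).unsqueeze(1) + self.W_enc(enc_states)
        )).squeeze(-1)                          # [batch, seq_len]
        weights = F.softmax(scores, dim=-1)     # attention weights over the input sequence
        # Context vector c_t: weighted sum of all encoder hidden states
        context = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)
        return context, weights
```

The context vector produced here is what gets fed into the decoder alongside its previous hidden state and output, which is exactly the extra connection between encoder and decoder that the diagram above illustrates.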