In this section, we'll implement a seq2seq model (an encoder-decoder RNN), based on the LSTM unit, for a simple sequence-to-sequence question-answer task. This model can be trained to map an input sequence (a question) to an output sequence (an answer), where the output need not be the same length as the input.
This type of seq2seq model has shown impressive performance in various other tasks such as speech recognition, Neural Machine Translation (NMT), and image caption generation.
The following diagram helps us visualize our seq2seq model:
An illustration of the sequence-to-sequence (seq2seq) model. Each rectangle is an RNN cell; the blue cells form the encoder and the red cells form the decoder.
In the encoder-decoder structure, one RNN (blue) encodes the input sequence. The encoder emits the context C...
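The encoder-decoder flow described above can be sketched in plain NumPy. This is a minimal, untrained illustration (not the trained model from this section): a hand-rolled LSTM cell, an encoder that compresses the question into a context C, and a decoder that unrolls from C for a different number of steps than the input. All dimensions, parameter names, and the toy token space are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 4, 8                               # toy embedding size and hidden size (assumed)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; W, U, b stack the i, f, o, g gate parameters."""
    z = W @ x + U @ h + b
    i = 1 / (1 + np.exp(-z[:H]))          # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))       # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))     # output gate
    g = np.tanh(z[3*H:])                  # candidate cell update
    c = f * c + i * g
    return o * np.tanh(c), c

# Separate, randomly initialized (untrained) parameters for encoder and decoder.
enc = (rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)), np.zeros(4*H))
dec = (rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)), np.zeros(4*H))
V = rng.normal(size=(D, H))               # projects decoder state back to token space

def encode(question):
    h, c = np.zeros(H), np.zeros(H)
    for x in question:                    # read the input sequence token by token
        h, c = lstm_step(x, h, c, *enc)
    return h, c                           # the final state serves as the context C

def decode(context, n_steps, start_token):
    h, c = context                        # decoder is initialized with the context C
    x, outputs = start_token, []
    for _ in range(n_steps):              # output length need not match input length
        h, c = lstm_step(x, h, c, *dec)
        y = V @ h                         # "logits" over a toy token space
        outputs.append(y)
        x = y                             # feed the prediction back as the next input
    return outputs

question = [rng.normal(size=D) for _ in range(6)]   # a 6-token input sequence
answer = decode(encode(question), 4, np.zeros(D))   # a 4-token output sequence
print(len(question), len(answer))                   # → 6 4
```

Note how the only thing passed from encoder to decoder is the final state pair, which is exactly the role of the context C in the diagram; the decoder then runs autoregressively, feeding each output back in as its next input.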