The networks that we've looked at so far have done some truly amazing things. But they've all had one pretty big limitation: they can only be applied to problems where the output is of a fixed and well-known size.
Sequence-to-sequence models are able to map sequences of inputs to sequences of outputs with variable lengths.
You might also see these models written as seq-to-seq or simply Seq2Seq. These are all names for sequence-to-sequence models.
When using a sequence-to-sequence model, we take a sequence in and get a sequence out, and the two sequences don't have to be the same length. Sequence-to-sequence models allow us to learn a mapping between the input sequence and the output sequence.
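To make the idea concrete, here is a minimal encoder-decoder sketch. This is only an illustration, not the text's own implementation: the framework (PyTorch), the vocabulary sizes, and all class and variable names are assumptions chosen for the example. The point it demonstrates is that the length of the output sequence is independent of the length of the input sequence.

```python
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """A toy encoder-decoder: encode the input sequence, then decode an output
    sequence whose length does not have to match the input's length."""
    def __init__(self, in_vocab=100, out_vocab=120, hidden=64):
        super().__init__()
        self.src_embed = nn.Embedding(in_vocab, hidden)
        self.tgt_embed = nn.Embedding(out_vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, out_vocab)

    def forward(self, src, tgt):
        # Encode the variable-length input into a hidden state...
        _, state = self.encoder(self.src_embed(src))
        # ...then decode an output sequence whose length is set by `tgt`,
        # not by the input, so the two lengths are independent.
        dec_out, _ = self.decoder(self.tgt_embed(tgt), state)
        return self.out(dec_out)

model = TinySeq2Seq()
src = torch.randint(0, 100, (1, 7))   # input sequence of length 7
tgt = torch.randint(0, 120, (1, 11))  # output sequence of length 11
logits = model(src, tgt)
print(logits.shape)  # torch.Size([1, 11, 120]) -- 11 output steps from 7 input steps
```

In practice the decoder would generate its output one token at a time until it produces an end-of-sequence token, which is how the output length ends up differing from the input length; the fixed `tgt` here just keeps the sketch short.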
There are a variety of applications where sequence-to-sequence models might be useful, and we will talk about those applications next.
...