Oriol Vinyals, a Google Brain scientist, and his co-authors (Vinyals et al., 2015) wrote the following:
"Sequences have become first-class citizens in supervised learning thanks to the resurgence of recurrent neural networks. Many complex tasks that require mapping from or to a sequence of observations can now be formulated with the sequence-to-sequence (seq2seq) framework, which employs the chain rule to efficiently represent the joint probability of sequences."
This observation has proven accurate, and the range of applications keeps growing. Consider the following sequence-to-sequence project ideas:
- Document summarization. Input sequence: a document. Output sequence: an abstract.
- Image super-resolution. Input sequence: a low-resolution image. Output sequence: a high-resolution image.
- Video subtitles. Input sequence: a video. Output sequence: text captions.
- Machine translation. Input sequence: text in the source language. Output sequence: text in the target language.
These are exciting applications, and the list is far from exhaustive.
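To make the encoder-decoder pattern concrete, here is a minimal sketch in PyTorch. Everything in it is an illustrative assumption rather than anything from the quoted paper: the GRU architecture, the vocabulary sizes, and the dimensions are all placeholder choices.

```python
# A minimal sketch of the seq2seq (encoder-decoder) pattern.
# All names and sizes below are illustrative assumptions, not from the source.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab=1000, tgt_vocab=1000, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        # Encode the source sequence into a final hidden state.
        _, h = self.encoder(self.src_emb(src))
        # Decode conditioned on that state; at each step the decoder sees the
        # previous target tokens, mirroring the chain-rule factorization
        # p(y_t | y_1, ..., y_{t-1}, x).
        dec_out, _ = self.decoder(self.tgt_emb(tgt), h)
        return self.out(dec_out)  # per-step logits over the target vocabulary

# Toy usage: a batch of 2 source sequences (length 5) mapped to logits
# for 4 target steps.
model = Seq2Seq()
src = torch.randint(0, 1000, (2, 5))
tgt = torch.randint(0, 1000, (2, 4))
logits = model(src, tgt)
print(logits.shape)  # torch.Size([2, 4, 1000])
```

During training, the logits at each step would be compared against the next target token (teacher forcing); at inference time, the decoder would instead feed its own predictions back in one step at a time.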