Sequence-to-sequence (Seq2Seq) models
We talked in detail about the sequence-to-sequence (Seq2Seq) architecture and the encoder-decoder paradigm in Chapter 12, Building Blocks of Deep Learning for Time Series. Just to refresh your memory, the Seq2Seq model is an encoder-decoder model in which an encoder encodes the input sequence into a latent representation, and a decoder then uses this latent representation to carry out the task at hand. This setup is inherently more flexible because of the separation between the encoder (which does the representation learning) and the decoder (which uses the representation for predictions). One of the biggest advantages of this approach, from a time series forecasting perspective, is that it removes the restriction of single-step-ahead forecasting. Under this modeling pattern, we can extend the forecast to any horizon we want.
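To make the encoder-decoder split concrete, here is a minimal sketch of a Seq2Seq forecaster in PyTorch: a GRU encoder compresses the input window into a latent state, and a GRU decoder unrolls that state autoregressively for an arbitrary forecast horizon. This is not the exact implementation we will build in this section; the class name, layer sizes, and the `horizon` parameter are illustrative assumptions.

```python
# A minimal Seq2Seq sketch, assuming a GRU encoder/decoder and a
# hypothetical Seq2SeqForecaster class (names are illustrative).
import torch
import torch.nn as nn


class Seq2SeqForecaster(nn.Module):
    def __init__(self, input_size: int = 1, hidden_size: int = 32, horizon: int = 24):
        super().__init__()
        self.horizon = horizon
        # Encoder: learns a latent representation of the history window
        self.encoder = nn.GRU(input_size, hidden_size, batch_first=True)
        # Decoder: unrolls the latent state one step at a time
        self.decoder = nn.GRU(input_size, hidden_size, batch_first=True)
        self.proj = nn.Linear(hidden_size, input_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_size)
        _, hidden = self.encoder(x)        # latent representation of the history
        dec_input = x[:, -1:, :]           # seed the decoder with the last observation
        outputs = []
        for _ in range(self.horizon):      # horizon is arbitrary, not fixed to one step
            dec_out, hidden = self.decoder(dec_input, hidden)
            step = self.proj(dec_out)      # (batch, 1, input_size)
            outputs.append(step)
            dec_input = step               # feed the prediction back in (autoregressive)
        return torch.cat(outputs, dim=1)   # (batch, horizon, input_size)


# Usage: a 48-step history in, a 24-step forecast out
model = Seq2SeqForecaster(input_size=1, hidden_size=32, horizon=24)
history = torch.randn(8, 48, 1)            # (batch, seq_len, features)
forecast = model(history)                  # shape: (8, 24, 1)
```

Because the decoder loop runs for `horizon` steps rather than one, the same latent representation can serve any forecast length, which is exactly the flexibility described above.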
In this section, let’s put together a few encoder-decoder models and test out our single-step-ahead...