Time Series Dense Encoder (TiDE)
We saw earlier in the chapter that a linear family of models outperformed many Transformer-based models. In 2023, Das et al. from Google proposed a model that extends that idea to handle non-linearity. They argued that linear models fall short when there are inherent non-linearities in the dependence between the future and the past, and that the inclusion of covariates compounds this problem.
Reference check:
The research paper by Das et al. on TiDE is cited in the References section as 20.
Therefore, they introduced a simple and efficient architecture based on Multi-Layer Perceptrons (MLPs) for long-term time series forecasting. The model encodes the past of the time series, along with the covariates, using dense MLPs, and then decodes this latent representation into a forecast. The model assumes channel independence (similar to PatchTST) and treats the different related time series in a multivariate problem as separate...
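To make the encode-then-decode idea concrete, here is a minimal sketch of a dense MLP encoder-decoder for a single (channel-independent) series. This is an illustrative, untrained toy, not the actual TiDE implementation: it omits the paper's residual blocks and covariate projections, and all layer sizes and weight initializations here are arbitrary assumptions.

```python
import numpy as np


def relu(x):
    """Element-wise ReLU activation."""
    return np.maximum(x, 0.0)


class DenseEncoderDecoder:
    """Toy MLP encoder-decoder: past window -> latent -> horizon forecast.

    A rough sketch of the encode/decode flow described in the text;
    hidden/latent sizes are illustrative choices, not TiDE's.
    """

    def __init__(self, lookback, horizon, hidden=32, latent=16, seed=0):
        rng = np.random.default_rng(seed)
        # Encoder: lookback window -> hidden -> latent representation
        self.W1 = rng.normal(0.0, 0.1, (lookback, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, latent))
        self.b2 = np.zeros(latent)
        # Decoder: latent representation -> hidden -> horizon forecast
        self.W3 = rng.normal(0.0, 0.1, (latent, hidden))
        self.b3 = np.zeros(hidden)
        self.W4 = rng.normal(0.0, 0.1, (hidden, horizon))
        self.b4 = np.zeros(horizon)

    def forward(self, past):
        """Map a 1-D array of `lookback` past values to `horizon` predictions."""
        h = relu(past @ self.W1 + self.b1)
        z = h @ self.W2 + self.b2          # dense latent encoding of the past
        h = relu(z @ self.W3 + self.b3)
        return h @ self.W4 + self.b4       # one value per future time step


# Usage: each channel of a multivariate problem would get its own
# independent pass through the same network, mirroring channel independence.
model = DenseEncoderDecoder(lookback=48, horizon=12)
series = np.sin(np.linspace(0.0, 6.0, 48))  # one univariate history window
forecast = model.forward(series)
print(forecast.shape)  # (12,)
```

Note how channel independence shows up in the interface: `forward` takes one univariate window at a time, so related series in a multivariate dataset are simply run through the network separately.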