LTSF-Linear family of models
There has been considerable debate about whether Transformers are the right tool for forecasting problems: popular Transformer papers have not used strong baselines to demonstrate their superiority, and the order-agnostic attention mechanism may not be the best way to approach strongly ordered time series. The criticism has been more pronounced for Long-Term Time Series Forecasting (LTSF), which relies more heavily on the extraction of strong trends and seasonalities. In 2023, Ailing Zeng et al. put Transformer models to the test, conducting a wide study on nine real-world multivariate datasets that pitted five Transformer models (FEDformer, Autoformer, Informer, Pyraformer, and LogTrans) against a set of embarrassingly simple one-layer linear models they proposed, collectively called LTSF-Linear (Linear, NLinear, and DLinear). Surprisingly, the simple linear models beat all the Transformer models comfortably.
Reference check:
The research papers by Ailing Zeng et al. and the different Transformer models, FEDformer, Autoformer...
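To ground what "simple linear model" means here, the plainest member of the family is literally a single linear layer that maps the lookback window directly to the forecast horizon, applied identically to every series. The following is a minimal PyTorch sketch, not the authors' reference implementation; the class name LTSFLinear and the tensor shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn


class LTSFLinear(nn.Module):
    """Minimal sketch of the Linear model from Zeng et al. (2023):
    one linear layer maps the lookback window (seq_len) directly to
    the forecast horizon (pred_len), shared across all series."""

    def __init__(self, seq_len: int, pred_len: int):
        super().__init__()
        self.linear = nn.Linear(seq_len, pred_len)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_series) -> (batch, pred_len, n_series)
        return self.linear(x.permute(0, 2, 1)).permute(0, 2, 1)


# Illustrative usage: forecast 96 steps ahead from a 336-step lookback
model = LTSFLinear(seq_len=336, pred_len=96)
x = torch.randn(32, 336, 7)   # batch of 32, 7 parallel series
print(model(x).shape)         # torch.Size([32, 96, 7])
```

The other family members build on this same one-layer idea: DLinear first splits the input into trend and seasonal components (via a moving average) and applies a linear layer to each, while NLinear normalizes the window by its last value before the linear map.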