Patch Time Series Transformer (PatchTST)
In 2021, Alexey Dosovitskiy et al. proposed the Vision Transformer (ViT), which brought the Transformer architecture, by then widely successful in Natural Language Processing, to computer vision. Although they were not the first to use patching, they applied it in a way that works particularly well for vision: the design breaks an image into fixed-size patches and feeds those patches to the Transformer as a sequence of tokens.
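To make the patching idea concrete, here is a minimal sketch in PyTorch of how an image can be cut into non-overlapping patches and flattened into a token sequence. The tensor shapes and variable names are illustrative assumptions, not the authors' code.

```python
import torch

image = torch.randn(3, 224, 224)  # (channels, height, width)
patch_size = 16                   # assumed patch size that divides 224 evenly

# Slice the image into non-overlapping 16x16 patches along height and width.
patches = image.unfold(1, patch_size, patch_size).unfold(2, patch_size, patch_size)
# patches: (3, 14, 14, 16, 16) -> flatten each patch into one token vector
patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, 3 * patch_size * patch_size)
print(patches.shape)  # torch.Size([196, 768]): a sequence of 196 patch tokens
```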
Reference check:
The research papers by Alexey Dosovitskiy et al. on Vision Transformers and by Yuqi Nie et al. on PatchTST are cited in the References section as 12 and 13, respectively.
Fast-forward to 2023, and the same patching design has been applied to time series forecasting. Yuqi Nie et al. proposed the Patch Time Series Transformer (PatchTST), adapting the patching design to time series. They were motivated by the apparent ineffectiveness of more complicated Transformer designs (such as Autoformer and Informer) on time series forecasting.
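The analogous operation on a time series is a sliding window over time steps. Below is a minimal sketch, assuming a univariate look-back window of 512 steps with a patch length of 16 and a stride of 8 (the defaults reported in the PatchTST paper); the variable names are illustrative, not the authors' code.

```python
import torch

series = torch.randn(512)  # univariate look-back window of 512 time steps
patch_len, stride = 16, 8  # assumed defaults from the PatchTST paper

# Slice the series into (possibly overlapping) patches; each patch becomes
# one input token for the Transformer instead of a single time step.
patches = series.unfold(0, patch_len, stride)
print(patches.shape)  # torch.Size([63, 16]): 63 tokens of length 16
```

Note how patching shortens the token sequence from 512 individual time steps to 63 patches, which reduces the cost of self-attention while letting each token carry local temporal context.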
In...