Exploring autoencoder variations
For tabular data, the network structure can be fairly simple: an MLP in which several fully connected layers progressively shrink the number of features in the encoder, and several fully connected layers progressively expand the representation back to the same dimensionality as the input in the decoder.
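A minimal sketch of such an MLP autoencoder is shown below, assuming PyTorch; the layer widths (32 input features compressed to an 8-dimensional code) are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn

class MLPAutoencoder(nn.Module):
    def __init__(self, n_features: int = 32, code_dim: int = 8):
        super().__init__()
        # Encoder: fully connected layers that gradually shrink the features
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 16),
            nn.ReLU(),
            nn.Linear(16, code_dim),
        )
        # Decoder: fully connected layers that expand back to the input size
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 16),
            nn.ReLU(),
            nn.Linear(16, n_features),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        code = self.encoder(x)        # compressed representation
        return self.decoder(code)     # reconstruction, same shape as x

# Usage: reconstruct a batch of 4 rows with 32 features each
model = MLPAutoencoder()
batch = torch.randn(4, 32)
reconstruction = model(batch)
loss = nn.functional.mse_loss(reconstruction, batch)
```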
For time-series or sequential data, RNN-based autoencoders can be used. One of the most cited research projects related to RNN-based autoencoders uses LSTM-based encoders and decoders: Sequence to Sequence Learning with Neural Networks by Ilya Sutskever, Oriol Vinyals, and Quoc V. Le (https://arxiv.org/abs/1409.3215). Rather than stacking encoder LSTMs and decoder LSTMs vertically and passing the hidden-state output sequence of each LSTM cell upward, the decoder continues the sequential flow of the encoder LSTM from its final hidden state and outputs the reconstructed input in reversed order.
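The sketch below illustrates this idea, again assuming PyTorch: the decoder LSTM is initialized with the encoder's final hidden and cell state and is trained to reconstruct the input sequence in reversed order. The layer sizes and the step-by-step decoding loop that feeds each prediction back in are assumptions for illustration, not the exact configuration used in the cited paper.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features: int = 1, hidden_dim: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(n_features, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features)
        batch_size, seq_len, n_features = x.shape
        # Encode the whole sequence; keep only the final hidden/cell state
        _, (h, c) = self.encoder(x)
        # Decode step by step, starting from a zero input and the encoder state
        step_input = torch.zeros(batch_size, 1, n_features, device=x.device)
        outputs = []
        for _ in range(seq_len):
            out, (h, c) = self.decoder(step_input, (h, c))
            step_output = self.output(out)
            outputs.append(step_output)
            step_input = step_output   # feed the prediction back as the next input
        return torch.cat(outputs, dim=1)

# Usage: train against the reversed input, as described in the text
model = LSTMAutoencoder()
sequence = torch.randn(4, 10, 1)              # batch of 10-step sequences
reconstruction = model(sequence)
target = torch.flip(sequence, dims=[1])       # reversed-order reconstruction target
loss = nn.functional.mse_loss(reconstruction, target)
```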