An autoencoder is yet another deep learning architecture that can be used for many interesting tasks, but it can also be considered a variation of the vanilla feed-forward neural network in which the output has the same dimensions as the input. As shown in Figure 1, an autoencoder works by feeding data samples (x1,...,x6) to the network. The network tries to learn a lower-dimensional representation of this data in layer L2, which you can think of as a way of encoding your dataset in a compressed form. The second part of the network, which you might call the decoder, is then responsible for reconstructing the input from this representation. In other words, the intermediate lower-dimensional representation that the network learns acts as a compressed version of the input data.
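To make the encoder/decoder split concrete, here is a minimal sketch of an autoencoder in plain NumPy: six inputs matching x1,...,x6, a three-unit hidden layer playing the role of L2, and a decoder that reconstructs the input. The layer sizes, learning rate, and toy dataset are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 100 samples with 6 features, mirroring the x1..x6 inputs.
X = rng.random((100, 6))

n_in, n_hidden = 6, 3  # encoder compresses 6 inputs down to 3 units
W_enc = rng.normal(0, 0.1, (n_in, n_hidden))
b_enc = np.zeros(n_hidden)
W_dec = rng.normal(0, 0.1, (n_hidden, n_in))
b_dec = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(500):
    # Forward pass: encode to the lower-dimensional layer, then decode.
    H = sigmoid(X @ W_enc + b_enc)      # compressed representation (layer L2)
    X_hat = sigmoid(H @ W_dec + b_dec)  # reconstruction of the input
    loss = np.mean((X_hat - X) ** 2)
    losses.append(loss)

    # Backward pass: mean-squared-error gradients through both layers.
    d_out = 2 * (X_hat - X) / X.size * X_hat * (1 - X_hat)
    d_hid = d_out @ W_dec.T * H * (1 - H)
    W_dec -= lr * H.T @ d_out
    b_dec -= lr * d_out.sum(axis=0)
    W_enc -= lr * X.T @ d_hid
    b_enc -= lr * d_hid.sum(axis=0)

print(f"reconstruction loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The training target is the input itself, which is what distinguishes an autoencoder from an ordinary feed-forward network: the reconstruction error drives the hidden layer to capture a compressed representation of the data.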
Not very different from all the other deep learning architectures that we have seen so far, autoencoders...