An autoencoder is a network with three or more layers, in which the input and output layers have the same number of neurons and the intermediate (hidden) layers have fewer. The network is trained simply to reproduce at the output, for each input item, the same pattern of activity presented at the input.
The remarkable aspect of the problem is that, because the hidden layer has fewer neurons, a network that learns from the examples and generalizes to an acceptable extent performs data compression: for each example, the states of the hidden neurons provide a compressed version of the pattern shared by the input and the output.
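To make this concrete, here is a minimal sketch (assuming PyTorch and arbitrary layer sizes of 64 input/output neurons and 16 hidden neurons, none of which come from the text): the network is trained to reconstruct its own input, and the smaller hidden layer's activations are then read out as the compressed code.

```python
import torch
import torch.nn as nn

INPUT_SIZE = 64    # number of input (and output) neurons, e.g. an 8x8 image
HIDDEN_SIZE = 16   # fewer hidden neurons than inputs -> compression

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # encoder: input layer -> smaller hidden layer
        self.encoder = nn.Sequential(nn.Linear(INPUT_SIZE, HIDDEN_SIZE), nn.Sigmoid())
        # decoder: hidden layer -> output layer of the same size as the input
        self.decoder = nn.Sequential(nn.Linear(HIDDEN_SIZE, INPUT_SIZE), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)      # compressed representation
        return self.decoder(code)   # reconstruction of the input

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder data purely for illustration; any real dataset would do.
data = torch.rand(256, INPUT_SIZE)

# Train the network to reproduce each input pattern at the output.
for epoch in range(100):
    optimizer.zero_grad()
    reconstruction = model(data)
    loss = loss_fn(reconstruction, data)  # reconstruction error
    loss.backward()
    optimizer.step()

# After training, the hidden-layer states give a compressed version
# of each example: 16 numbers instead of 64.
with torch.no_grad():
    compressed = model.encoder(data)
print(compressed.shape)  # torch.Size([256, 16])
```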
In the first examples of such networks, in the 1980s, compression of simple images was obtained in this way, with performance not far from that obtainable with standard, more complicated methods.
Interest in autoencoders was recently revived...