Introduction to autoencoders
Autoencoders are a class of neural networks that attempt to recreate their input as their target using back-propagation. An autoencoder consists of two parts: an encoder and a decoder. The encoder reads the input and compresses it to a compact representation, and the decoder reads the compact representation and recreates the input from it. In other words, the autoencoder tries to learn the identity function by minimizing the reconstruction error. Autoencoders therefore have an inherent capability to learn a compact representation of data. They are at the center of deep belief networks and find applications in image reconstruction, clustering, machine translation, and much more.
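The following is a minimal sketch of this encoder-decoder structure, assuming a Keras/TensorFlow setup (the excerpt itself names no framework); the layer sizes (784 inputs, a 32-unit bottleneck) and the training data name x_train are purely illustrative. The encoder maps the input to a smaller code, the decoder maps the code back to the input dimension, and training minimizes the reconstruction error between the input and its reconstruction.

import tensorflow as tf

input_dim = 784   # e.g. flattened 28x28 images (illustrative choice)
code_dim = 32     # bottleneck smaller than the input, forcing a compact representation

inputs = tf.keras.Input(shape=(input_dim,))
code = tf.keras.layers.Dense(code_dim, activation="relu")(inputs)        # encoder
outputs = tf.keras.layers.Dense(input_dim, activation="sigmoid")(code)   # decoder

autoencoder = tf.keras.Model(inputs, outputs)
# The target is the input itself, so the loss measures the reconstruction error.
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)  # x_train serves as both input and target

Note that the same tensor appears as both input and target in the (hypothetical) fit call: this is what makes the network learn an approximation of the identity function through a compressed intermediate code.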
You might think that implementing an identity function with a deep neural network is boring; however, the way in which this is done makes it interesting. The number of hidden units in the autoencoder is typically smaller than the number of input (and output) units. This forces the encoder to learn...