Autoencoders, also known as Diabolo networks or autoassociators, were first proposed in the 1980s by Hinton and the PDP group [1]. They are feedforward networks, without any feedback, and they learn via unsupervised learning. Like the multilayer perceptrons of Chapter 3, Neural Networks-Perceptrons, they are trained with the backpropagation algorithm, but with a major difference: the target is the same as the input.
We can think of an autoencoder as consisting of two cascaded networks. The first network is an encoder: it takes the input x and applies a transformation h to produce the encoded signal y:

y = h(x)
The second network uses the encoded signal y as its input and applies another transformation f to obtain a reconstructed signal r:

r = f(y)
We define the error e as the difference between the original input x and the reconstructed signal r, e = x - r. The network...
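As a minimal sketch of this two-stage structure (not the book's own code), the forward pass of a small autoencoder can be written in NumPy. The dimensions, the random initialization, and the choice of a sigmoid encoder with a linear decoder are all illustrative assumptions; only the pattern y = h(x), r = f(y), e = x - r comes from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): an 8-dimensional input compressed
# to a 3-dimensional code.
n_in, n_code = 8, 3

# Encoder and decoder parameters, randomly initialized for illustration.
W_enc = rng.normal(scale=0.1, size=(n_code, n_in))
b_enc = np.zeros(n_code)
W_dec = rng.normal(scale=0.1, size=(n_in, n_code))
b_dec = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(x):
    # The encoder transformation: y = h(x)
    return sigmoid(W_enc @ x + b_enc)

def decode(y):
    # The decoder transformation: r = f(y)
    return W_dec @ y + b_dec

x = rng.normal(size=n_in)   # input
y = encode(x)               # encoded signal
r = decode(y)               # reconstructed signal
e = x - r                   # reconstruction error, e = x - r
print("code shape:", y.shape, "reconstruction shape:", r.shape)
```

Training would then adjust the weights by backpropagation to minimize this error, with the input itself serving as the target.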