Vanilla autoencoders
The Vanilla autoencoder, as proposed by Hinton and Salakhutdinov in their 2006 paper Reducing the Dimensionality of Data with Neural Networks, consists of only one hidden layer. The number of neurons in the hidden layer is smaller than the number of neurons in the input (or output) layer.
This creates a bottleneck in the flow of information through the network, which is why the hidden layer in between is also called the "bottleneck layer." Learning in the autoencoder consists of developing a compact representation of the input signal at the hidden layer so that the output layer can faithfully reproduce the original input.
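To make this structure concrete, here is a minimal sketch of such a single-hidden-layer autoencoder written with tf.keras; the 784-dimensional input (a flattened 28x28 image) and the 32-unit bottleneck are illustrative assumptions rather than values taken from the text:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

input_dim = 784    # assumed: flattened 28x28 images (not specified in the text)
hidden_dim = 32    # assumed bottleneck size; the key point is hidden_dim < input_dim

inputs = tf.keras.Input(shape=(input_dim,))
# Bottleneck layer: learns a compact representation of the input
encoded = layers.Dense(hidden_dim, activation="relu")(inputs)
# Output layer: reconstructs the original input from the compact representation
decoded = layers.Dense(input_dim, activation="sigmoid")(encoded)

autoencoder = models.Model(inputs, decoded)
# The reconstruction loss compares the output with the original input, so the
# model is trained with the input as its own target:
# autoencoder.fit(x_train, x_train, epochs=..., batch_size=...)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
```

Note that the input and output layers have the same number of neurons, while the hidden layer in between is deliberately narrower; this is the bottleneck described above.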
In the following diagram, you can see the architecture of the Vanilla autoencoder:
Figure 2: Architecture of the Vanilla autoencoder, visualized
Let us try to build a Vanilla autoencoder. While Hinton used it for dimensionality reduction in the paper, in the code that follows we will use the autoencoder for image reconstruction instead. We will train the autoencoder...