Vanilla autoencoders
The vanilla autoencoder, as described by Hinton and Salakhutdinov in their 2006 paper Reducing the Dimensionality of Data with Neural Networks, consists of a single hidden layer. The number of neurons in the hidden layer is smaller than the number of neurons in the input (or output) layer.
This creates a bottleneck in the flow of information through the network. The hidden layer (y) between the encoder input and decoder output is therefore also called the “bottleneck layer.” Learning in the autoencoder consists of developing a compact representation of the input signal at the hidden layer, from which the output layer can faithfully reproduce the original input.
In Figure 8.2, you can see the architecture of a vanilla autoencoder:
Figure 8.2: Architecture of the vanilla autoencoder
Let’s try to build a vanilla autoencoder. While Hinton used it for dimensionality reduction in the paper, in the code to follow we will use autoencoders for image reconstruction...
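As a starting point, the single-hidden-layer architecture described above can be sketched in Keras as follows. This is a minimal sketch, not the chapter's exact code: the dimensions (784 inputs for flattened 28x28 images, 128 bottleneck units), activations, and loss are illustrative choices, and the training data here is random, used only to demonstrate that the model trains and reconstructs inputs of the same shape.

```python
# Minimal vanilla autoencoder sketch: one hidden (bottleneck) layer.
# Dimensions, activations, and loss are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow import keras

input_dim = 784   # e.g. a flattened 28x28 image
hidden_dim = 128  # bottleneck: fewer neurons than the input layer

inputs = keras.Input(shape=(input_dim,))
# Encoder: compress the input into the bottleneck representation y
y = keras.layers.Dense(hidden_dim, activation="relu")(inputs)
# Decoder: reconstruct the original input from y
outputs = keras.layers.Dense(input_dim, activation="sigmoid")(y)

autoencoder = keras.Model(inputs, outputs)
# The target is the input itself: the network learns to reproduce it
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Random stand-in data, just to exercise the API
x = np.random.rand(64, input_dim).astype("float32")
autoencoder.fit(x, x, epochs=1, batch_size=32, verbose=0)

reconstruction = autoencoder.predict(x, verbose=0)
print(reconstruction.shape)  # same shape as the input: (64, 784)
```

Note that the model is trained with the input as its own target (`fit(x, x)`), which is what distinguishes an autoencoder from an ordinary supervised network.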