In the previous sections, we learned about and gained practical experience with the RBM and its variant, the DBN. Recall that an RBM consists of an input layer and a hidden layer, and it attempts to reconstruct the input data by learning a latent representation of it. The autoencoder (AE), the neural network model we will study starting from this section, shares a similar idea. A basic AE is made up of three layers: the input, hidden, and output layers. The output layer is a reconstruction of the input through the hidden layer. A general diagram of an AE is depicted as follows:
As we can see, when the autoencoder takes in data, it first encodes it into the hidden layer, and then tries to reconstruct the original input data from that encoding. Meanwhile, the hidden layer extracts a latent representation of the input data. Because of this structure, the...
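To make the encode-then-reconstruct idea concrete, here is a minimal sketch of a three-layer autoencoder in plain NumPy, trained with batch gradient descent on mean squared reconstruction error. The layer sizes, learning rate, and toy data are illustrative assumptions, not values from the text; a real implementation would typically use a framework such as TensorFlow or PyTorch.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data that lies on a 3-dimensional manifold embedded in 8 dimensions,
# so a 3-unit hidden layer can plausibly capture a latent representation.
n_samples, n_input, n_hidden = 200, 8, 3
Z = rng.random((n_samples, n_hidden))
X = 1.0 / (1.0 + np.exp(-(Z @ rng.normal(size=(n_hidden, n_input)))))

# Encoder and decoder parameters (untied weights, zero-initialized biases)
W_enc = rng.normal(0, 0.1, (n_input, n_hidden))
b_enc = np.zeros(n_hidden)
W_dec = rng.normal(0, 0.1, (n_hidden, n_input))
b_dec = np.zeros(n_input)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for epoch in range(500):
    # Forward pass: encode into the hidden layer, then reconstruct
    H = sigmoid(X @ W_enc + b_enc)        # latent representation
    X_hat = sigmoid(H @ W_dec + b_dec)    # reconstruction of the input

    losses.append(np.mean((X_hat - X) ** 2))

    # Backpropagation of the squared error through both sigmoid layers
    # (constant factors are folded into the learning rate)
    d_out = (X_hat - X) * X_hat * (1 - X_hat) / n_samples
    d_hid = (d_out @ W_dec.T) * H * (1 - H)

    W_dec -= lr * (H.T @ d_out)
    b_dec -= lr * d_out.sum(axis=0)
    W_enc -= lr * (X.T @ d_hid)
    b_enc -= lr * d_hid.sum(axis=0)

print(f"reconstruction MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

After training, `H` holds the latent codes: each 8-dimensional input is compressed into 3 hidden activations from which the output layer rebuilds the input, which is exactly the structure described above.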