The models we have learned up to now were trained using supervised learning. In this section, we will learn about autoencoders. They are feedforward, non-recurrent neural networks, and they learn through unsupervised learning. They are the latest buzz, along with generative adversarial networks, and find applications in image reconstruction, clustering, machine translation, and much more. They were initially proposed in the 1980s by Geoffrey E. Hinton and the PDP group (http://www.cs.toronto.edu/~fritz/absps/clp.pdf).
The autoencoder basically consists of two cascaded neural networks. The first network acts as an encoder; it takes the input x and encodes it using a transformation h to the encoded signal y, shown in the following equation:

y = h(x)
The second neural network uses the encoded signal y as its input and performs another transformation f to get a reconstructed signal r = f(y) = f(h(x)).
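The encoder/decoder structure described above can be sketched with plain NumPy. This is a minimal, illustrative example, not code from the text: it builds a linear encoder h and decoder f on hypothetical toy data and trains both by gradient descent on the reconstruction error, so the variable names (`W_enc`, `W_dec`) and dimensions are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 samples in 4-D that lie on a 2-D subspace,
# so a 2-D bottleneck can reconstruct them well.
Z = rng.normal(size=(200, 2))
X = Z @ rng.normal(size=(2, 4))

# Encoder h: x -> y (4 -> 2) and decoder f: y -> r (2 -> 4), both linear here.
W_enc = rng.normal(scale=0.1, size=(4, 2))
W_dec = rng.normal(scale=0.1, size=(2, 4))

def loss(X, W_enc, W_dec):
    R = X @ W_enc @ W_dec          # r = f(h(x))
    return np.mean((R - X) ** 2)   # mean squared reconstruction error

lr = 0.01
initial = loss(X, W_enc, W_dec)
for _ in range(2000):
    Y = X @ W_enc                  # encoded signal y = h(x)
    R = Y @ W_dec                  # reconstruction r = f(y)
    G = 2 * (R - X) / len(X)       # gradient of the loss w.r.t. R
    grad_dec = Y.T @ G             # backpropagate into the decoder
    grad_enc = X.T @ (G @ W_dec.T) # ...and into the encoder
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = loss(X, W_enc, W_dec)
print(f"reconstruction loss: {initial:.4f} -> {final:.4f}")
```

Because training minimizes the difference between the input x and the reconstruction r, the bottleneck y is forced to capture the structure of the data without any labels, which is what makes this unsupervised.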