So far in this book, we have looked at applications of neural networks to supervised learning. Specifically, in each project we had a labeled dataset (that is, features x and labels y), and our goal was to train a neural network on this dataset so that it could predict the label y for any new instance x.
A typical feedforward neural network is shown in the following diagram:

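To make this recap concrete, the following is a minimal sketch of such a supervised feedforward network in Keras. The layer sizes, the 20-dimensional input, and the binary label are illustrative assumptions, not taken from any of the earlier projects:

```python
# Minimal feedforward classifier sketch (illustrative sizes, random stand-in data)
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(32, activation='relu', input_shape=(20,)),  # hidden layer
    Dense(1, activation='sigmoid'),                   # predicted label y
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train on a labeled dataset (features x, labels y); random data stands in here
x = np.random.rand(100, 20)
y = np.random.randint(0, 2, size=(100, 1))
model.fit(x, y, epochs=5, verbose=0)
```

The key point is that training requires both x and y: the network learns a mapping from features to labels.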
In this chapter, we will study a different class of neural networks, known as autoencoders. Autoencoders represent a paradigm shift from the conventional neural networks we have seen so far. The goal of an autoencoder is to learn a latent representation of its input, which is usually a compressed version of the original input.
All autoencoders have an encoder and a decoder. The role of the encoder is to encode the input into a learned, compressed representation (the latent representation), while the role of the decoder is to reconstruct the original input from that compressed representation.
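The following is a minimal sketch of this encoder-decoder structure in Keras. The 784-dimensional input (for example, a flattened 28x28 image) and the 32-unit bottleneck are illustrative assumptions:

```python
# Minimal autoencoder sketch: encoder compresses, decoder reconstructs
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense

inputs = Input(shape=(784,))
# Encoder: maps the input to a 32-dimensional latent representation
latent = Dense(32, activation='relu')(inputs)
# Decoder: reconstructs the original 784-dimensional input from the latent code
outputs = Dense(784, activation='sigmoid')(latent)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer='adam', loss='mse')

# Note: an autoencoder is trained to reproduce its own input, so the same
# data serves as both input and target; no labels y are needed:
# autoencoder.fit(x_train, x_train, epochs=10)
```

Because the bottleneck layer is smaller than the input, the network is forced to learn a compressed representation that preserves as much information as possible about the original input.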