In Chapter 16, Deep Learning, we saw that neural networks are successful at supervised learning by extracting a hierarchical feature representation that's useful for the given task. CNNs, for example, learn and synthesize increasingly complex patterns useful for identifying or detecting objects in an image.
An autoencoder, in contrast, is a neural network designed exclusively to learn a new representation, that is, an encoding of the input. To this end, the training forces the network to faithfully reproduce the input. Since autoencoders typically use the same data as input and output, they are also considered an instance of self-supervised learning.
In the process, the output of a hidden layer, h, becomes the code that represents the input. More specifically, the network can be viewed as consisting of an encoder function, h = f(x), that learns the code from the input, and a decoder function, r = g(h), that reconstructs the input from that code.
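To make the encoder-decoder view concrete, the following is a minimal sketch of a fully connected autoencoder using the Keras functional API; the layer sizes, the MNIST data, and the mean squared error loss are illustrative assumptions rather than details taken from this section.

```python
# Minimal autoencoder sketch (assumes TensorFlow/Keras is installed);
# sizes, dataset, and loss are illustrative choices.
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 28 * 28     # flattened input x
encoding_dim = 32       # size of the code h

# Encoder: h = f(x)
inputs = keras.Input(shape=(input_dim,))
code = layers.Dense(encoding_dim, activation='relu', name='code')(inputs)

# Decoder: r = g(h), the reconstruction of x
outputs = layers.Dense(input_dim, activation='sigmoid')(code)

autoencoder = keras.Model(inputs, outputs)
encoder = keras.Model(inputs, code)   # maps new inputs to their learned code

autoencoder.compile(optimizer='adam', loss='mse')

# Self-supervised training: the input serves as its own target
(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, input_dim).astype('float32') / 255.0
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256, validation_split=0.1)

# Extract the learned representation for a few samples
codes = encoder.predict(x_train[:10])
```

Note that the same input tensor feeds both models: training the autoencoder on (x, x) pairs fits the encoder and decoder jointly, and the standalone encoder can then be reused to produce the compressed representation h for downstream tasks.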