Principles of autoencoders
In this section, we're going to go over the principles of autoencoders, using the MNIST dataset that we were first introduced to in the previous chapters.
Firstly, we need to be aware that an autoencoder has two operators. These are:
Encoder: This transforms the input, x, into a low-dimensional latent vector, z = f(x). Since the latent vector is of low dimension, the encoder is forced to learn only the most important features of the input data. For example, in the case of MNIST digits, the important features may include writing style, tilt angle, roundness of stroke, thickness, and so on. Essentially, these are the most important pieces of information needed to represent digits zero to nine.
Decoder: This tries to recover the input from the latent vector, x̃ = g(z). Although the latent vector has a low dimension, it is of a sufficient size to allow the decoder to recover the input data.
The goal of the decoder is to recover an output, x̃, that is as close as possible to the input, x.
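To make these two operators concrete, the following is a minimal sketch of an autoencoder in Keras. The latent dimension of 16, the single Dense layer in each operator, and the training settings are illustrative assumptions, not values prescribed by this section.

import numpy as np
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# Load MNIST and flatten each 28x28 image into a 784-dimensional vector
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

latent_dim = 16  # size of the latent vector z (an assumed, illustrative value)

# Encoder: z = f(x), compresses the 784-dim input into latent_dim features
inputs = Input(shape=(784,))
z = Dense(latent_dim, activation="relu")(inputs)
encoder = Model(inputs, z, name="encoder")

# Decoder: x_tilde = g(z), recovers the input from the latent vector
latent_inputs = Input(shape=(latent_dim,))
x_tilde = Dense(784, activation="sigmoid")(latent_inputs)
decoder = Model(latent_inputs, x_tilde, name="decoder")

# Autoencoder: the decoder applied to the encoder's output
autoencoder = Model(inputs, decoder(encoder(inputs)), name="autoencoder")
autoencoder.compile(optimizer="adam", loss="mse")

# Train so that x_tilde is as close as possible to x (the input is the target)
autoencoder.fit(x_train, x_train, epochs=1, batch_size=128,
                validation_data=(x_test, x_test))

Note that the model is trained with the input as its own target; the mean squared error between x and x̃ here is one common choice of reconstruction loss, not the only one.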