In Chapter 4, Implementing Autoencoders with Keras, we learned about autoencoders. We know that an autoencoder learns an arbitrary function that represents the input data in a latent feature space of reduced dimensions, that is, a compressed latent representation. A variational autoencoder (VAE), instead of learning an arbitrary function, learns the parameters of a probability distribution over the compressed representation. If we can sample points from this distribution, we can generate new data. A VAE consists of an encoder network and a decoder network.
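The following is a minimal sketch of this encoder-sample-decoder structure using the Keras functional API. The input and latent dimensions, the layer sizes, and the use of tf.keras are illustrative assumptions rather than values taken from this chapter:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

original_dim = 784   # e.g. flattened 28 x 28 images (assumption)
latent_dim = 2       # size of the latent space (assumption)

# Encoder: maps an input x to the parameters (mean, log-variance)
# of a Gaussian distribution over the latent variable z.
encoder_inputs = layers.Input(shape=(original_dim,))
h = layers.Dense(256, activation="relu")(encoder_inputs)
z_mean = layers.Dense(latent_dim)(h)
z_log_var = layers.Dense(latent_dim)(h)

# Reparameterization trick: sample z = mean + sigma * epsilon so the
# sampling step stays differentiable with respect to the encoder weights.
def sample_z(args):
    z_mean, z_log_var = args
    epsilon = tf.random.normal(shape=tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_log_var) * epsilon

z = layers.Lambda(sample_z)([z_mean, z_log_var])
encoder = Model(encoder_inputs, [z_mean, z_log_var, z], name="encoder")

# Decoder: maps a sampled latent point z back to the input space.
decoder_inputs = layers.Input(shape=(latent_dim,))
h_dec = layers.Dense(256, activation="relu")(decoder_inputs)
decoder_outputs = layers.Dense(original_dim, activation="sigmoid")(h_dec)
decoder = Model(decoder_inputs, decoder_outputs, name="decoder")

# End-to-end VAE: encode, sample a latent point, then decode it.
vae = Model(encoder_inputs, decoder(z), name="vae")
```

Once trained, sampling points from the latent distribution and passing them through the decoder alone is what lets a VAE generate new data.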
The structure of a VAE is illustrated in the following diagram:
Let's understand the roles of the encoder and decoder networks in a VAE:
- Encoder: This is a neural network that takes an input, x, and outputs a latent representation, z. The goal of an encoder...