Variational Autoencoders (VAEs) are a more recent take on the autoencoding problem. Unlike standard autoencoders, which learn a compressed representation of the data, VAEs learn the random process that generates the data, rather than an essentially arbitrary function as our neural networks have done so far.
VAEs also have an encoder and a decoder part. The encoder learns the mean and standard deviation of a normal distribution over the latent variables that are assumed to have generated the data. These variables are called latent because they are not observed explicitly but rather inferred from the data.
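As an illustration, here is a minimal sketch of such an encoder, written in PyTorch; the framework, layer sizes, and names such as `Encoder`, `sample_latent`, and `latent_dim` are assumptions for this example rather than anything fixed by the text. The encoder outputs a mean and log-variance, and a latent point is then drawn from the corresponding normal distribution using the reparameterization trick.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an input vector to the mean and log-variance of a Gaussian over the latent space."""
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=2):
        super().__init__()
        self.hidden = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)       # mean of the latent Gaussian
        self.log_var = nn.Linear(hidden_dim, latent_dim)  # log-variance (more stable than a raw std)

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        return self.mu(h), self.log_var(h)

def sample_latent(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)
    return mu + std * eps
```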
The decoder part of a VAE maps these latent-space points back into the data space. As before, we need a loss function to measure the difference between the original inputs and their reconstructions. Sometimes an extra term is added to this loss: the Kullback–Leibler divergence between the learned latent distribution and a standard normal prior, which regularizes the latent space.
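Continuing the sketch above, a matching decoder and loss could look as follows; again the layer sizes, the use of binary cross-entropy as the reconstruction term, and the names `Decoder` and `vae_loss` are assumptions made for this example.

```python
class Decoder(nn.Module):
    """Maps a latent vector back into the data space."""
    def __init__(self, latent_dim=2, hidden_dim=256, output_dim=784):
        super().__init__()
        self.hidden = nn.Linear(latent_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, output_dim)

    def forward(self, z):
        h = torch.relu(self.hidden(z))
        return torch.sigmoid(self.out(h))  # outputs in [0, 1], e.g. pixel intensities

def vae_loss(x, x_reconstructed, mu, log_var):
    """Reconstruction term plus the KL divergence between N(mu, sigma^2) and N(0, I)."""
    recon = nn.functional.binary_cross_entropy(x_reconstructed, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl
```

The KL term penalizes latent distributions that drift far from the standard normal prior, which is what keeps the latent space smooth enough to sample from when generating new data.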