Variational autoencoders
In an autoencoder, the decoder takes the latent variables directly as its input. Variational autoencoders (VAEs), introduced in 2014, differ in that the decoder's input is sampled from a distribution parameterized by the latent variables. To make this concrete, suppose we have an autoencoder with two latent variables and we pick two values at random, say 0.4 and 1.2. We send these two values straight to the decoder to generate an image.
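As a rough sketch of this direct decoding step (using PyTorch for illustration, with a hypothetical small decoder standing in for a trained one):

import torch

# Hypothetical decoder: maps a 2-dimensional latent vector to a flattened
# 28-by-28 image; in practice this would be the trained decoder network.
decoder = torch.nn.Sequential(
    torch.nn.Linear(2, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 28 * 28),
    torch.nn.Sigmoid(),
)

# In a plain autoencoder, the latent values go straight to the decoder.
z = torch.tensor([[0.4, 1.2]])          # the two latent values from the example
image = decoder(z).reshape(1, 28, 28)   # generated image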
In a VAE, these values don't go to the decoder directly. Instead, they are used as the mean and variance of a Gaussian distribution, and we draw a sample from this distribution to send to the decoder for image generation. Since the Gaussian is one of the most important distributions in machine learning, let's go over some basics of Gaussian distributions before creating a VAE.
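A minimal sketch of that sampling step, again in PyTorch, assuming (as in the example above) that the two values 0.4 and 1.2 play the roles of mean and variance; a real VAE encoder typically outputs a mean and a log-variance for each latent dimension:

import torch

mu = torch.tensor([0.4])     # value used as the mean
var = torch.tensor([1.2])    # value used as the variance
std = var.sqrt()

# Draw a sample from N(mu, var); this sample, not mu and var themselves,
# is what the decoder receives.
epsilon = torch.randn_like(std)   # standard normal noise
z = mu + std * epsilon            # sample from the Gaussian
# image = decoder(z)              # the decoder then generates an image from z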
Gaussian distribution
A Gaussian distribution is characterized by two parameters – mean and variance. I think we are all familiar with the different...