A variational autoencoder (VAE) is a generative model that uses Bayesian inference to model the underlying probability distribution of images, so that it can sample new images from that distribution. Like an ordinary autoencoder, it consists of two components: an encoder (the layers that compress the input down to the bottleneck) and a decoder (the layers that reconstruct the input from its compressed representation at the bottleneck). The difference between a VAE and an ordinary autoencoder is that instead of mapping the input to a single latent vector at the bottleneck, the encoder of a VAE maps the input to a distribution. Random samples are then drawn from that distribution and fed to the decoder.
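The encode → sample → decode pipeline described above can be sketched in a few lines of NumPy. This is a toy illustration, not a trained model: the "layers" here are single random linear maps, and all names and dimensions are made up for the example. The key idea it shows is that the encoder outputs the parameters (mean and log-variance) of a Gaussian, and a latent vector is sampled from that Gaussian before decoding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
input_dim, latent_dim = 784, 2

# Toy "encoder": linear maps producing the mean and log-variance
# of the latent Gaussian (a real VAE learns these via training).
W_mu = rng.normal(scale=0.01, size=(input_dim, latent_dim))
W_logvar = rng.normal(scale=0.01, size=(input_dim, latent_dim))

# Toy "decoder": a linear map from latent space back to input space.
W_dec = rng.normal(scale=0.01, size=(latent_dim, input_dim))

def encode(x):
    # The encoder outputs distribution parameters, not a single point.
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar):
    # Sample z ~ N(mu, sigma^2) via z = mu + sigma * eps, eps ~ N(0, I).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    return z @ W_dec

x = rng.random((1, input_dim))   # a fake flattened "image"
mu, logvar = encode(x)           # distribution parameters
z = reparameterize(mu, logvar)   # random latent sample
x_hat = decode(z)                # reconstruction from the sample
```

Because the latent code is sampled rather than fixed, encoding the same input twice generally yields two different latent vectors, which is exactly what lets a trained VAE generate new images by sampling from the latent distribution.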
Since it cannot...