Variational autoencoders (VAEs) build on the idea of the standard autoencoder and are powerful generative models, among the most popular means of learning a complicated distribution in an unsupervised fashion. VAEs are probabilistic models rooted in Bayesian inference. A probabilistic model is exactly what it sounds like:
Probabilistic models incorporate random variables and probability distributions into the model of an event or phenomenon.
VAEs, like other generative models, are probabilistic in that they seek to learn a distribution from which they can subsequently sample. While all generative models are probabilistic models, not all probabilistic models are generative models.
The probabilistic structure of VAEs comes into play with their encoders. Instead of building an encoder that outputs a single value to describe the input data, we want to learn...
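To make this contrast concrete, here is a minimal NumPy sketch (all names, weights, and dimensions are illustrative, not a real implementation) of a probabilistic encoder: rather than mapping an input to a single latent vector, it maps the input to the mean and log-variance of a Gaussian over the latent space, from which a latent code is then sampled via the reparameterization trick.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
input_dim, latent_dim = 8, 2

# Toy linear "encoder" weights. A real VAE would learn these with a
# neural network; random weights suffice to show the structure.
W_mu = rng.normal(scale=0.1, size=(input_dim, latent_dim))
W_logvar = rng.normal(scale=0.1, size=(input_dim, latent_dim))

def encode(x):
    """Map an input to the parameters of q(z|x), a diagonal Gaussian.

    Instead of one deterministic code, the encoder outputs a mean
    and a log-variance for each latent dimension.
    """
    mu = x @ W_mu
    log_var = x @ W_logvar
    return mu, log_var

def sample_latent(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

x = rng.normal(size=input_dim)
mu, log_var = encode(x)
z = sample_latent(mu, log_var)
print(mu.shape, log_var.shape, z.shape)
```

Calling `encode` twice on the same input yields the same `(mu, log_var)` pair, but each call to `sample_latent` draws a different `z` from that distribution, which is precisely the probabilistic behavior a deterministic autoencoder lacks.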