Understanding generative models
A generative model is an unsupervised learning model that learns the underlying data distribution of the training set and generates new data, possibly with variations on what it has seen. Since the true underlying distribution is often impossible to know exactly, the neural network instead learns a function that approximates that distribution as closely as possible.
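One common way to make "as close a match as possible" precise is maximum likelihood: the model distribution is fit so that the training data is as probable as possible under it, which is equivalent to minimizing the KL divergence to the (unknown) data distribution. The notation below is our own shorthand, not taken from this text:

```latex
\theta^{*} \;=\; \arg\max_{\theta} \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log p_{\theta}(x)\big]
\;\;\Longleftrightarrow\;\;
\theta^{*} \;=\; \arg\min_{\theta} \; D_{\mathrm{KL}}\big(p_{\text{data}} \,\|\, p_{\theta}\big)
```

Here \(p_{\text{data}}\) is the true distribution we never observe directly and \(p_{\theta}\) is the distribution implied by the trained network.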
The most common methods used to train generative models are as follows:
- Variational autoencoders: A high-dimensional input image is encoded by an autoencoder into a lower-dimensional latent representation, and it is of the utmost importance that this process preserves the underlying data distribution. A plain autoencoder can only map that representation back to the input image through its decoder; it cannot introduce any variability to generate similar but new images. The VAE introduces variability by sampling constrained latent vectors that still follow the underlying distribution, as sketched in the code after this item. Though VAEs help in creating...
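To make the idea concrete, here is a minimal VAE sketch in PyTorch; the framework choice, layer sizes, and names are our assumptions rather than anything prescribed by this text. The encoder outputs a mean and log-variance, the reparameterization step samples a latent vector (this is where the variability comes from), and the loss keeps the latent distribution close to a standard Gaussian while still reconstructing the input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE sketch: flattened image -> latent code -> reconstruction.
    Sizes are illustrative assumptions (e.g. 784 for 28x28 images)."""
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=20):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(hidden_dim, latent_dim)   # log-variance of q(z|x)
        self.dec_hidden = nn.Linear(latent_dim, hidden_dim)
        self.dec_out = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps: sampling, not a fixed code, gives the VAE
        # its ability to generate varied outputs.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def decode(self, z):
        h = F.relu(self.dec_hidden(z))
        return torch.sigmoid(self.dec_out(h))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term plus a KL term that constrains q(z|x)
    # to stay close to a standard normal prior.
    recon = F.binary_cross_entropy(recon_x, x, reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld
```

After training, new images can be generated by sampling a latent vector from the standard normal prior and passing it through `decode` alone, which is exactly what a plain autoencoder cannot do in a principled way.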