In Chapter 9, Variational Autoencoders, we discussed VAEs as a mechanism for dimensionality reduction that learns the parameters of the distribution of the input space and reconstructs inputs from random draws in the latent space using those learned parameters. As we saw in that chapter, this offers a number of advantages, such as the following:
- The ability to reduce the effect of noisy inputs, since the model learns the distribution of the input rather than the input itself
- The ability to generate new samples by simply querying the latent space, as the sketch after this list shows
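To make the second point concrete, the following is a minimal sketch of sampling from a trained VAE. The `decoder` model, its saved file name, and the 2-dimensional latent space are assumptions for illustration, not a specific model from Chapter 9:

```python
import numpy as np
from tensorflow.keras.models import load_model

latent_dim = 2  # assumed latent dimensionality

# Hypothetical file containing a decoder saved after training.
decoder = load_model('vae_decoder.h5')

# Generating new samples is just a matter of querying the latent space:
# draw from the standard normal prior and decode.
z = np.random.normal(size=(16, latent_dim))
samples = decoder.predict(z)
print(samples.shape)  # one generated sample per latent draw
```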
On the other hand, GANs can also be used to generate samples, like VAEs, but the two models learn in quite different ways. We can think of a GAN as having two major parts: a critic and a generator. A VAE likewise has two networks: an encoder and a decoder.
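To make this two-part structure concrete, here is a minimal sketch of a generator and a critic in Keras; the layer sizes, the 100-dimensional latent space, and the 784-dimensional input space (for example, flattened 28x28 images) are illustrative assumptions:

```python
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

latent_dim = 100  # assumed size of the random input to the generator
data_dim = 784    # assumed size of the input space

# The generator maps a random latent vector to a fake sample.
z = Input(shape=(latent_dim,))
g = Dense(256, activation='relu')(z)
g = Dense(data_dim, activation='sigmoid')(g)
generator = Model(z, g, name='generator')

# The critic maps a sample (real or fake) to a single score.
x = Input(shape=(data_dim,))
c = Dense(256, activation='relu')(x)
c = Dense(1, activation='sigmoid')(c)
critic = Model(x, c, name='critic')
```

During training, the critic learns to tell real samples from the generator's fakes, while the generator learns to produce samples that fool the critic.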
If we were to make any connection between the two, it would be that the decoder and the generator play a very similar role: both take a vector drawn from the latent space and produce a sample in the input space.
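The parallel is easy to see in code. Continuing with the hypothetical `decoder` and `generator` from the sketches above, both networks consume a latent draw and emit a sample in the input space:

```python
# Both networks play the same role: latent vector in, sample out.
z_vae = np.random.normal(size=(1, 2))    # latent draw for the decoder
z_gan = np.random.normal(size=(1, 100))  # latent draw for the generator

sample_from_vae = decoder.predict(z_vae)
sample_from_gan = generator.predict(z_gan)
```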