In this chapter, we saw how to systematically augment neural networks with randomness so that they output instances of what we humans deem creative. With VAEs, we saw how neural networks, as parameterized function approximators, can learn a probability distribution over a continuous latent space. We then saw how to randomly sample from such a distribution to generate synthetic instances resembling the original data. In the second part of the chapter, we saw how two networks can be trained in an adversarial manner to accomplish a similar task.
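As a concrete reminder of the sampling step, here is a minimal sketch in PyTorch. The network sizes are illustrative assumptions, and the decoder is an untrained stand-in for the decoder a real VAE would learn during training; the point is only the mechanic of drawing latent codes from the prior and decoding them.

```python
import torch
import torch.nn as nn

latent_dim = 32    # assumed size of the continuous latent space
data_dim = 784     # e.g. a flattened 28x28 image; illustrative only

# Untrained stand-in for the VAE decoder; in practice this would be
# the decoder network learned during training.
decoder = nn.Sequential(
    nn.Linear(latent_dim, 256),
    nn.ReLU(),
    nn.Linear(256, data_dim),
    nn.Sigmoid(),
)

# Draw latent codes from the standard normal prior the VAE was trained
# to match, then decode them into synthetic instances.
z = torch.randn(16, latent_dim)
with torch.no_grad():
    samples = decoder(z)   # shape: (16, data_dim)
```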
Training a GAN is simply a different strategy for learning a latent space than its counterpart, the VAE. While GANs offer some key benefits for synthetic image generation, they have downsides as well: they are notoriously difficult to train, and they often generate images from only a few modes of the underlying data distribution, a failure known as mode collapse.
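For reference, the sketch below shows one adversarial training step under assumed, illustrative choices: simple fully connected networks, a random tensor standing in for a batch of real data, and the standard binary cross-entropy losses. The alternating discriminator and generator updates are exactly the part that makes GAN training delicate in practice.

```python
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 32, 784, 64   # illustrative sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real = torch.randn(batch, data_dim)   # stand-in for a batch of real data
ones = torch.ones(batch, 1)
zeros = torch.zeros(batch, 1)

# Discriminator step: label real data 1 and generated data 0.
fake = generator(torch.randn(batch, latent_dim))
d_loss = bce(discriminator(real), ones) + bce(discriminator(fake.detach()), zeros)
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# Generator step: try to make the discriminator output 1 on fakes.
g_loss = bce(discriminator(fake), ones)
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```

Note the `fake.detach()` in the discriminator step: it blocks gradients from flowing into the generator while the discriminator updates, so each network is optimized only against its own objective.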