Summary
In this chapter, you were introduced to a new class of generative models called Generative Adversarial Networks (GANs). Inspired by concepts from game theory, GANs model the data-generating probability density implicitly rather than through an explicit likelihood. We started the chapter by placing GANs in the overall taxonomy of generative models and comparing them with some of the other methods covered in earlier chapters. We then moved on to the finer details of how GANs actually work, covering the value function for the minimax game as well as a few variants, such as the non-saturating generator loss and the maximum likelihood game. We developed a multilayer-perceptron-based vanilla GAN to generate MNIST digits using the TensorFlow Keras APIs.
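As a quick reference, the minimax value function covered in the chapter is the one from Goodfellow et al.'s original GAN formulation, where the discriminator \(D\) and generator \(G\) play a two-player game:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The non-saturating variant keeps the same discriminator objective but has the generator maximize \(\mathbb{E}_{z \sim p_z}[\log D(G(z))]\) instead of minimizing \(\mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]\), which provides stronger gradients early in training when the discriminator easily rejects generated samples.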
In the next section, we touched upon a few improved GANs: Deep Convolutional GANs, Conditional GANs, and finally, Wasserstein GANs. We not only explored their major contributions and enhancements, but also...
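For comparison with the minimax objective above, the Wasserstein GAN replaces the log-based value function with an objective that estimates the Earth Mover's (Wasserstein-1) distance, with the discriminator replaced by a "critic" constrained to be 1-Lipschitz:

```latex
\min_G \max_{D \in \mathcal{D}}
  \mathbb{E}_{x \sim p_{\text{data}}}\big[D(x)\big]
  - \mathbb{E}_{z \sim p_z}\big[D(G(z))\big]
```

Here \(\mathcal{D}\) denotes the set of 1-Lipschitz functions; in the original WGAN this constraint is enforced by clipping the critic's weights to a small range.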