Chapter 1, Introduction to Generative Adversarial Networks, starts with the basic concepts of GANs. Readers will learn what a discriminator is, what a generator is, and what Game Theory is. The next few topics will cover the architecture of a generator, the architecture of a discriminator, objective functions for generators and discriminators, training algorithms for GANs, Kullback–Leibler and Jensen–Shannon divergence, evaluation metrics for GANs, common problems with GANs (such as vanishing and exploding gradients), Nash equilibrium, batch normalization, and regularization in GANs.
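For readers who want a concrete anchor, the standard GAN objective that this chapter builds on is the minimax game between the generator G and the discriminator D, as introduced by Goodfellow et al.:

```latex
\min_G \max_D V(D, G) =
\mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] +
\mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

For an optimal discriminator, minimizing this objective with respect to G is equivalent (up to a constant) to minimizing the Jensen–Shannon divergence between the data distribution and the generator's distribution, which is why the chapter covers KL and JS divergence alongside the training algorithm.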
Chapter 2, 3D-GAN – Generating Shapes Using GANs, starts with a short introduction to 3D-GANs and their various architectural details. In this chapter, we will train a 3D-GAN to generate realistic 3D shapes. We will write code to collect the 3D ShapeNet dataset, clean it, and make it ready for training. Then, we will write code for a 3D-GAN using the Keras deep learning library.
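To give a flavor of the kind of model built in this chapter, the following is a minimal sketch of what a 3D-GAN generator can look like in Keras, mapping a latent vector to a voxel grid with transposed 3D convolutions. The latent dimension and layer sizes here are illustrative, not the book's exact configuration.

```python
# Minimal sketch (not the book's exact model): a 3D-GAN generator that maps a
# latent vector to a 64 x 64 x 64 voxel grid using transposed 3D convolutions.
from tensorflow.keras import layers, models

def build_3dgan_generator(latent_dim=200):
    noise = layers.Input(shape=(latent_dim,))
    x = layers.Dense(256 * 4 * 4 * 4, activation="relu")(noise)
    x = layers.Reshape((4, 4, 4, 256))(x)
    # Each block doubles the spatial resolution: 4 -> 8 -> 16 -> 32.
    for filters in (128, 64, 32):
        x = layers.Conv3DTranspose(filters, kernel_size=4, strides=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    # Final upsampling to 64^3; sigmoid gives voxel occupancy probabilities in [0, 1].
    voxels = layers.Conv3DTranspose(1, kernel_size=4, strides=2, padding="same",
                                    activation="sigmoid")(x)
    return models.Model(noise, voxels)
```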
Chapter 3, Face Aging Using Conditional GAN, introduces Conditional Generative Adversarial Networks (cGANs) and the Age-cGAN to readers. We will learn the different steps of data preparation, such as downloading, cleaning, and formatting data, using the IMDb Wiki Images dataset. We will write the code for an Age-cGAN using the Keras framework and then train the network on the IMDb Wiki Images dataset. Finally, we will use age as the conditioning input to our trained model to generate images of a person's face at different ages.
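As a hint of how the conditioning works, here is a minimal, hypothetical sketch of a conditional generator that takes an age category (a one-hot vector, assumed purely for illustration) alongside the noise vector. The Age-cGAN built in the chapter has its own, more elaborate architecture.

```python
# Minimal sketch (not the chapter's exact Age-cGAN): conditioning a generator
# on an age category by concatenating it with the latent noise vector.
from tensorflow.keras import layers, models

def build_conditional_generator(latent_dim=100, num_age_classes=6):
    noise = layers.Input(shape=(latent_dim,))
    age = layers.Input(shape=(num_age_classes,))   # one-hot age category (illustrative)
    x = layers.Concatenate()([noise, age])         # the condition enters the generator here
    x = layers.Dense(8 * 8 * 128, activation="relu")(x)
    x = layers.Reshape((8, 8, 128))(x)
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu")(x)
    # Output: a 64 x 64 RGB image in [-1, 1].
    image = layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh")(x)
    return models.Model([noise, age], image)
```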
Chapter 4, Generating Anime Characters Using DCGANs, starts with an introduction to DCGANs. We will learn the different steps of data preparation, such as gathering the anime characters dataset, cleaning it, and preparing it for training. We will cover the Keras implementation of a DCGAN inside a Jupyter Notebook. Next, we will learn different ways to train the DCGAN and how to choose different hyper-parameters for it. Finally, we will generate anime characters using our trained model and discuss practical applications of DCGANs.
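As an example of the kind of hyper-parameter choices discussed in this chapter, DCGAN-style models are often trained with the Adam optimizer using a small learning rate and a reduced beta_1, as suggested in the original DCGAN paper. The values below are illustrative rather than the chapter's final settings.

```python
# Illustrative DCGAN-style training setup; values follow the original DCGAN
# paper's suggestions, not necessarily the choices made in this chapter.
from tensorflow.keras.optimizers import Adam

dcgan_optimizer = Adam(learning_rate=0.0002, beta_1=0.5)

# Typical knobs tuned when training a DCGAN on an anime-faces dataset:
hyperparameters = {
    "batch_size": 128,   # mini-batch size
    "latent_dim": 100,   # size of the noise vector fed to the generator
    "epochs": 100,       # number of passes over the dataset
    "image_size": 64,    # images are resized to 64 x 64 before training
}
```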
Chapter 5, Using SRGANs to Generate Photo-Realistic Images, explains how to train an SRGAN to generate photo-realistic, high-resolution images from low-resolution inputs. Readers will learn where to gather the dataset from, how to clean it, and how to get it into a format that is ready for training.
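As a taste of the formatting step, super-resolution training needs pairs of low-resolution inputs and high-resolution targets. The following is a hypothetical sketch of building such pairs by downscaling images with Pillow; the directory path and image sizes are placeholders.

```python
# Hypothetical sketch of preparing (low-res, high-res) training pairs for an
# SRGAN by downscaling high-resolution images; paths and sizes are placeholders.
import glob
import numpy as np
from PIL import Image

def load_sr_pairs(image_dir, high_res=(256, 256), low_res=(64, 64)):
    low_images, high_images = [], []
    for path in glob.glob(image_dir + "/*.jpg"):
        # Pillow resizes RGB images with bicubic resampling by default.
        image = Image.open(path).convert("RGB")
        high = image.resize(high_res)
        low = image.resize(low_res)
        # Scale pixel values to [-1, 1] to match a tanh generator output.
        high_images.append(np.asarray(high, dtype=np.float32) / 127.5 - 1.0)
        low_images.append(np.asarray(low, dtype=np.float32) / 127.5 - 1.0)
    return np.array(low_images), np.array(high_images)
```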
Chapter 6, StackGAN – Text to Photo-Realistic Image Synthesis, starts with an introduction to StackGANs. Data collection and data preparation are important steps, and we will learn the process of gathering the dataset, cleaning it, and formatting it for training. We will write the code for a StackGAN in Keras inside a Jupyter Notebook. Next, we will train the network on the CUB dataset. Finally, once the model is trained, we will generate photo-realistic images from text descriptions. We will also discuss different industry applications of StackGANs and how to deploy them in production.
Chapter 7, CycleGAN – Turn Paintings into Photos, explains how to train a CycleGAN to turn paintings into photos. We will start with an introduction to CycleGANs and look at their different applications. We will cover different data gathering, data cleaning, and data formatting techniques. Next, we will write the Keras implementation of the CycleGAN and walk through a detailed explanation of the code in a Jupyter Notebook. We will then train the CycleGAN on the dataset that we have prepared and test the trained model by turning paintings into photos. Finally, we will look at practical applications of CycleGANs.
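The key idea that makes unpaired painting-to-photo translation possible is the cycle-consistency loss: an image translated to the other domain and back should match the original. In the notation of the original CycleGAN paper, with generators G and F mapping between the two domains:

```latex
\mathcal{L}_{\text{cyc}}(G, F) =
\mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\lVert F(G(x)) - x \rVert_1\big] +
\mathbb{E}_{y \sim p_{\text{data}}(y)}\big[\lVert G(F(y)) - y \rVert_1\big]
```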
Chapter 8, Conditional GAN – Image-to-Image Translation Using Conditional Adversarial Networks, covers how to train a conditional GAN for image-to-image translation. We will start with an introduction to conditional GANs and different data preparation techniques, such as data gathering, data cleaning, and data formatting. We will then write the code for the conditional GAN in Keras inside a Jupyter Notebook and learn how to train it on the dataset that we have prepared, exploring different hyper-parameters along the way. Finally, we will test the conditional GAN and discuss different use cases of image-to-image translation in real-world applications.
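For reference, the widely used pix2pix formulation of this objective combines the conditional adversarial loss with an L1 reconstruction term, weighted by a hyper-parameter \(\lambda\):

```latex
G^{*} = \arg\min_G \max_D \; \mathcal{L}_{\text{cGAN}}(G, D) + \lambda \, \mathcal{L}_{L1}(G),
\qquad
\mathcal{L}_{L1}(G) = \mathbb{E}_{x, y, z}\big[\lVert y - G(x, z) \rVert_1\big]
```

The L1 term keeps the output close to the ground-truth target image, while the adversarial term pushes it to look realistic.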
Chapter 9, Predicting the Future of GANs, is the final chapter. After covering the fundamentals of GANs and working through six projects, this chapter gives readers a glimpse into the future of GANs. Here, we will look at how the adoption of GANs over the last three to four years has been phenomenal and how widely the industry has embraced them. I will also discuss my personal views on the future of GANs.