Problems with training GANs

As with any technology, there are some problems associated with GANs. These problems generally relate to the training process and include mode collapse, internal covariate shift, and vanishing gradients. Let's look at each of these in more detail.

Mode collapse

Mode collapse refers to a situation in which the generator network produces samples with little variety, or starts generating the same images over and over. Real-world probability distributions are often multimodal and very complex in nature: they contain data from different observations and have multiple peaks corresponding to different sub-groups of samples. Sometimes, a GAN fails to model such a multimodal probability distribution and captures only one or a few of its modes; this is mode collapse. A situation in which all the generated samples are virtually identical is known as complete collapse.
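One simple way to notice mode collapse during training is to measure how similar the generator's outputs are to each other. The following snippet is a minimal, illustrative sketch (the `generator` model and the latent dimension are assumptions for illustration, not values from this book's projects): it generates a batch of samples and computes their mean pairwise distance, a value that drifts toward zero as the outputs collapse onto a few near-identical images.

```python
import numpy as np

def mean_pairwise_distance(generator, latent_dim=100, n_samples=64):
    """Generate a batch of samples and return their mean pairwise L2 distance.

    A value close to zero suggests the generator is producing nearly
    identical outputs, which is a symptom of mode collapse. `generator`
    is assumed to be a Keras model mapping latent vectors of size
    `latent_dim` to images.
    """
    z = np.random.normal(0, 1, size=(n_samples, latent_dim))
    samples = generator.predict(z)            # shape: (n_samples, H, W, C)
    flat = samples.reshape(n_samples, -1)     # flatten each sample
    diffs = flat[:, None, :] - flat[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Average over the off-diagonal entries only
    return dists.sum() / (n_samples * (n_samples - 1))
```

Logging this value every few epochs gives a cheap, rough indicator; a sudden drop is a hint that the generator is collapsing onto a single mode.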

There are many methods that we can use to overcome the mode collapse problem. These include the following:

  • Training multiple models (GANs), one for each mode of the data

  • Training GANs with more diverse samples of the data

Vanishing gradients

During backpropagation, the gradient flows backward, from the final layer to the first layer. As it flows backward, it often gets smaller and smaller. Sometimes, the gradient becomes so small that the initial layers learn very slowly or stop learning completely. In this case, the gradient barely changes the weight values of the initial layers, so the training of the initial layers in the network is effectively stopped. This is known as the vanishing gradients problem.

This problem gets worse as we train bigger (deeper) networks with gradient-based optimization methods. Gradient-based optimization methods update a parameter's value according to how much the network's output changes when we change the parameter's value by a small amount. If a change in the parameter's value causes only a tiny change in the network's output, the weight update will be tiny as well, so the network stops learning.

This is also a problem when we use saturating activation functions, such as sigmoid and tanh. The sigmoid activation function squashes values into the range between 0 and 1, mapping large positive values of x to approximately 1 and large negative values of x to approximately 0. The tanh activation function squashes input values into the range between -1 and 1, mapping large positive input values to approximately 1 and large negative input values to approximately -1. In both saturated regions, the local gradient is close to zero. When we apply backpropagation, the chain rule of differentiation multiplies these local gradients layer by layer, so by the time we reach the initial layers of the network, the gradient (the error signal) has decreased exponentially. This causes the vanishing gradients problem.
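To see the multiplying effect of the chain rule concretely, note that the derivative of the sigmoid, σ'(x) = σ(x)(1 - σ(x)), never exceeds 0.25. The snippet below is a toy illustration (not tied to any particular network in this book) that raises this best-case factor to the power of the network depth, showing how quickly the gradient signal reaching the first layers shrinks.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)      # maximum value is 0.25, reached at x = 0

# Best case: every layer operates at the point of steepest slope (x = 0).
# The chain rule multiplies one such factor per layer.
max_grad = sigmoid_derivative(0.0)           # 0.25
for depth in (2, 5, 10, 20):
    print(f"{depth:2d} layers -> gradient factor <= {max_grad ** depth:.2e}")
# 20 layers -> gradient factor <= 9.09e-13: the first layers barely learn
```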

To overcome this problem, we can use activation functions such as ReLU, LeakyReLU, and PReLU. The gradients of these activation functions don't saturate in the same way during backpropagation, which allows neural networks to train efficiently. Another solution is to use batch normalization, which normalizes the inputs to the hidden layers of the network.
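As a minimal sketch of how these two remedies look in Keras (the layer widths and input shape below are arbitrary choices for illustration, not values prescribed by this book's projects), a fully connected block can replace saturating activations with LeakyReLU and normalize the inputs to each hidden layer with batch normalization:

```python
from keras.models import Sequential
from keras.layers import Dense, LeakyReLU, BatchNormalization

# Illustrative fully connected block with non-saturating activations.
model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(LeakyReLU(alpha=0.2))        # small negative slope, no saturation
model.add(BatchNormalization())        # normalizes inputs to the next layer
model.add(Dense(256))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization())
model.add(Dense(1, activation='sigmoid'))   # sigmoid only at the output
model.summary()
```

A LeakyReLU slope of 0.2 is a common choice in GAN discriminators, but it is only a default here, not a requirement.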

Internal covariate shift

Internal covariate shift occurs when the distribution of the inputs to a layer changes during training, typically because the parameters of the preceding layers keep changing. When the input distribution shifts, the hidden layers have to keep adapting to the new distribution, which slows down the training process and means it takes longer to converge to a good minimum. The problem is most severe when the statistical distribution of the inputs to the network is drastically different from the input that it has seen before. Batch normalization and other normalization techniques can solve this problem. We will explore these in the following sections.
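In the meantime, here is a minimal NumPy sketch of what batch normalization computes in its forward pass: each mini-batch of hidden-layer activations is standardized feature-wise to zero mean and unit variance, and then rescaled and shifted by learned parameters (shown below as plain arrays, gamma and beta, rather than trainable variables):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch of activations feature-wise.

    x:     activations of shape (batch_size, num_features)
    gamma: learned scale, shape (num_features,)
    beta:  learned shift, shape (num_features,)
    """
    mean = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                      # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta              # learned rescale and shift

# Toy usage: a batch whose distribution has drifted far from standard normal
x = np.random.randn(4, 3) * 10 + 5
out = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
print(out.mean(axis=0), out.std(axis=0))     # ~0 mean, ~1 std per feature
```

Whatever distribution the previous layers hand to this layer, the normalized activations stay in a stable range, which is what keeps the later layers from having to readjust constantly.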
