As with tricks for training neural networks in general, several sources offer best practices for training generative adversarial networks. These best practices were developed mainly to circumvent the difficulty of training GANs with the objective function described in Section 2.4. Note that these tricks may not apply, or may be unnecessary, for other GAN formulations such as LSGAN or WGAN.
Some of the problems associated with the original GAN objective function seem to have been addressed by the development of alternative loss functions, such as those of the Least-Squares GAN (LSGAN) and the Wasserstein GAN (WGAN).
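To make the contrast between these objectives concrete, here is a minimal sketch of the three discriminator/critic losses in plain Python (no deep-learning framework), operating on batches of raw scores; the function and variable names are illustrative, not from any particular library:

```python
import math

def gan_d_loss(real_logits, fake_logits):
    # Original GAN discriminator loss (binary cross-entropy on logits):
    # -E[log sigmoid(D(x))] - E[log(1 - sigmoid(D(G(z))))]
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))
    real = sum(-math.log(sig(r)) for r in real_logits) / len(real_logits)
    fake = sum(-math.log(1.0 - sig(f)) for f in fake_logits) / len(fake_logits)
    return real + fake

def lsgan_d_loss(real_scores, fake_scores):
    # LSGAN discriminator loss: least squares against targets 1 (real) and 0 (fake),
    # which penalizes samples by their distance from the decision target.
    real = sum((r - 1.0) ** 2 for r in real_scores) / len(real_scores)
    fake = sum(f ** 2 for f in fake_scores) / len(fake_scores)
    return 0.5 * (real + fake)

def wgan_critic_loss(real_scores, fake_scores):
    # WGAN critic loss: the critic maximizes E[D(x)] - E[D(G(z))],
    # so the minimized loss is the negation. No sigmoid is applied.
    return (sum(fake_scores) / len(fake_scores)
            - sum(real_scores) / len(real_scores))

# Example: a critic scoring real samples higher than fakes yields
# a negative WGAN loss, i.e. the critic is doing well.
print(wgan_critic_loss([1.0, 2.0], [0.0, -1.0]))  # → -2.0
```

Note that LSGAN and WGAN both drop the saturating log-sigmoid of the original objective, which is one reason gradients remain informative even when the discriminator confidently separates real from fake samples.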
We present these different algorithms and loss functions, and recommend studying them in tandem with Google's recent study "Are GANs Created Equal?". In this paper, while comparing different GAN loss functions and algorithms, the authors...