I've previously mentioned Soumith Chintala's GAN hacks repository (https://github.com/soumith/ganhacks), which is an excellent place to start when you're trying to make your GAN stable. Now that we've talked about how difficult it can be to train a stable GAN, let's look at some of the safe choices from that repository that will likely help you succeed. While there are quite a few hacks out there, here are my top recommendations that haven't already been covered in this chapter:
- Batch norm: When using batch normalization, construct separate minibatches of all-real and all-fake data, and update the discriminator on each one separately, so the batch statistics for real and generated samples are never mixed (see the batch norm sketch after this list).
- Leaky ReLU: Leaky ReLU is a variation of the ReLU activation function. Recall that the ReLU function is f(x) = max(0, x).
Leaky ReLU, however, is formulated as:
f(x) = x if x > 0, and f(x) = αx otherwise, where α is a small positive constant (commonly 0.01, or 0.2 in many GAN implementations).
Leaky ReLU allows a very small, non-zero gradient when the unit isn't active (that is, when the input is negative), which helps keep gradients flowing through the discriminator instead of dying out (see the Leaky ReLU sketch after this list).
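Here's a minimal sketch of the batch norm hack. It is not taken from the repository or this book's codebase; it assumes a Keras-style discriminator and generator compiled for use with train_on_batch, and the z_dim latent size is a hypothetical placeholder. The key point is that each call to train_on_batch sees a minibatch that is entirely real or entirely fake, so any BatchNormalization layers in the discriminator never compute statistics over a mix of the two distributions:

```python
import numpy as np

def train_discriminator_step(discriminator, generator, real_images, z_dim=100):
    """Update the discriminator on an all-real batch, then an all-fake batch."""
    batch_size = real_images.shape[0]

    # All-real minibatch, labeled 1: batch norm statistics come from
    # real samples only.
    d_loss_real = discriminator.train_on_batch(
        real_images, np.ones((batch_size, 1)))

    # All-fake minibatch, labeled 0: batch norm statistics come from
    # generated samples only.
    noise = np.random.normal(0, 1, (batch_size, z_dim))
    fake_images = generator.predict(noise)
    d_loss_fake = discriminator.train_on_batch(
        fake_images, np.zeros((batch_size, 1)))

    return 0.5 * np.add(d_loss_real, d_loss_fake)
```

Compare this with the common shortcut of concatenating real and fake images into a single batch: that version is faster to write, but it lets batch norm average statistics across two very different distributions, which is exactly what this hack avoids.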
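And here is a small Leaky ReLU sketch, again assuming Keras purely for illustration. It shows the formula above as a plain NumPy function, and how you would typically swap ReLU for the built-in LeakyReLU layer in a discriminator block (the layer sizes are just example values):

```python
import numpy as np
from keras.layers import Dense, LeakyReLU
from keras.models import Sequential

def leaky_relu(x, alpha=0.2):
    # f(x) = x when x > 0, alpha * x otherwise
    return np.where(x > 0, x, alpha * x)

# Typical use inside a discriminator: replace ReLU with LeakyReLU so that
# negative pre-activations still pass a small gradient backward.
discriminator_hidden = Sequential([
    Dense(256, input_dim=784),   # input_dim=784 is just an example (28x28 images)
    LeakyReLU(alpha=0.2),
])
```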