Reviewing GANs
Apart from PixelCNN, a CNN-based model that we covered in Chapter 1, Getting Started with Image Generation Using TensorFlow, all the other generative models we have learned about are based on (variational) autoencoders or generative adversarial networks (GANs). Strictly speaking, a GAN is not a network but a training method that makes use of two networks – a generator and a discriminator. I tried to fit a lot of content into this book, so the information can be overwhelming. We will now go over a summary of the important techniques we have learned, grouping them into the following categories:
- Optimizer and activation functions
- Adversarial loss
- Auxiliary loss
- Normalization
- Regularization
Optimizer and activation functions
Adam is the most popular optimizer for training GANs, followed by RMSprop. Typically, the first moment (beta_1) in Adam is set to 0 and the second moment (beta_2) is set to 0.999. The learning rate for the generator is set to...
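As a minimal sketch of how such optimizers could be set up in TensorFlow – note that the learning rates below are illustrative assumptions, not values prescribed by this section:

```python
import tensorflow as tf

# Adam with beta_1 = 0 and beta_2 = 0.999, a common choice when training GANs.
# The learning rates here are assumed for illustration only; generator and
# discriminator are often given different rates.
generator_optimizer = tf.keras.optimizers.Adam(
    learning_rate=1e-4, beta_1=0.0, beta_2=0.999)
discriminator_optimizer = tf.keras.optimizers.Adam(
    learning_rate=4e-4, beta_1=0.0, beta_2=0.999)
```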