In this section, we will discuss the parameter values used to train the DiscoGAN. These are presented in the following table:
| Parameter name | Variable name and value set | Rationale |
| --- | --- | --- |
| Learning rate for the Adam optimizer | `self.l_r = 2e-4` | A GAN should always be trained with a low learning rate for better stability, and DiscoGAN is no different. |
| Decay rates for the Adam optimizer | `self.beta1 = 0.5`, `self.beta2 = 0.99` | The parameter `beta1` controls the decaying average of the gradients, while `beta2` controls the decaying average of the square of the gradients. |
| Epochs | `self.epoch = 200` | 200 epochs is sufficient for the DiscoGAN network to converge in this implementation. |
| Batch size | `self.batch_size = 64` | A batch size of 64 works well for this implementation; however, it may need to be reduced under resource constraints. |
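To make these settings concrete, the following is a minimal sketch of how the hyperparameters might be collected in the model class and passed to the optimizer. The `DiscoGAN` class skeleton and the use of `tf.keras.optimizers.Adam` are illustrative assumptions; the actual implementation may wire these values in differently.

```python
import tensorflow as tf

class DiscoGAN:
    """Hypothetical container for the hyperparameters discussed above."""

    def __init__(self):
        # Low learning rate for stable GAN training
        self.l_r = 2e-4
        # Adam decay rates: beta1 for the moving average of the gradients,
        # beta2 for the moving average of the squared gradients
        self.beta1 = 0.5
        self.beta2 = 0.99
        # Training schedule
        self.epoch = 200
        self.batch_size = 64

    def build_optimizer(self):
        # Both the generator and the discriminator would typically
        # use optimizers configured with these same settings
        return tf.keras.optimizers.Adam(
            learning_rate=self.l_r,
            beta_1=self.beta1,
            beta_2=self.beta2,
        )
```

In practice, two separate optimizer instances would be created (one for the generator and one for the discriminator), since each maintains its own moment estimates for its own set of trainable variables.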