In the train_network function, we first define the optimizers for the generator and discriminator loss functions. We use the Adam optimizer for both the generators and the discriminators, since it is an extension of stochastic gradient descent that works well for training GANs. Adam maintains a decaying average of past gradients, much like momentum, which steadies the update direction, and a decaying average of squared gradients, which provides information about the curvature of the cost function. The loss variables registered through tf.summary are written to the log files and can therefore be monitored in TensorBoard. The following is the detailed code for the train function:
def train_network(self):
    self.learning_rate = tf.placeholder(tf.float32)
    self.d_optimizer = tf.train.AdamOptimizer...
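Since the excerpt above ends mid-statement, here is a minimal sketch of how the full optimizer and summary setup might look. The loss tensors self.d_loss and self.g_loss and the d_/g_ variable-name prefixes are assumptions for illustration, not names taken from the original code:

# Assumes TensorFlow 1.x and: import tensorflow as tf
def train_network(self):
    # Learning rate is fed at runtime so it can be decayed during training.
    self.learning_rate = tf.placeholder(tf.float32, shape=[], name='lr')

    # Split the trainable variables between discriminator and generator
    # (assumes their variable names are prefixed with 'd_' and 'g_').
    t_vars = tf.trainable_variables()
    d_vars = [v for v in t_vars if v.name.startswith('d_')]
    g_vars = [v for v in t_vars if v.name.startswith('g_')]

    # Adam optimizers for the discriminator and generator losses
    # (beta1=0.5 is a common choice for GAN training, assumed here).
    self.d_optimizer = tf.train.AdamOptimizer(
        self.learning_rate, beta1=0.5).minimize(self.d_loss, var_list=d_vars)
    self.g_optimizer = tf.train.AdamOptimizer(
        self.learning_rate, beta1=0.5).minimize(self.g_loss, var_list=g_vars)

    # Register the losses as summaries so TensorBoard can plot them.
    tf.summary.scalar('d_loss', self.d_loss)
    tf.summary.scalar('g_loss', self.g_loss)
    self.summary_op = tf.summary.merge_all()

During the training loop, the merged summary_op would be evaluated alongside the optimizer ops and written out with a tf.summary.FileWriter, which is what makes the losses visible in TensorBoard.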