Next comes the training function. Yes, it is a big one. Yet, as you will soon see, it is quite intuitive, and basically combines everything we have implemented so far:
def train(g_learning_rate,    # learning rate for the generator
          g_beta_1,           # exponential decay rate for the 1st moment estimates in the Adam optimizer
          d_learning_rate,    # learning rate for the discriminator
          d_beta_1,           # exponential decay rate for the 1st moment estimates in the Adam optimizer
          leaky_alpha,        # negative slope of the LeakyReLU activations
          init_std,           # standard deviation used for weight initialization
          smooth=0.1,         # label smoothing
          sample_size=100,    # latent sample size (i.e. 100 random numbers)
          epochs=200,
          batch_size=128,     # train batch size
          eval_size=16):      # evaluation batch size

    # labels for the train batch and the evaluation batch
    y_train_real, y_train_fake = make_labels(batch_size)
    y_eval_real,  y_eval_fake  = make_labels(eval_size)
    ...
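Before walking through the body, here is a minimal sketch of how the function might be invoked. The hyperparameter values below are illustrative assumptions (common GAN defaults), not values prescribed by this text:

# Illustrative call only -- these hyperparameter values are assumptions,
# not taken from the text above.
train(g_learning_rate=0.0001, g_beta_1=0.5,
      d_learning_rate=0.001,  d_beta_1=0.5,
      leaky_alpha=0.2, init_std=0.02)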