Another great advantage of GANs is that they can be conditioned on any kind of data. Conditional GANs (cGANs) can be trained to model the conditional distribution, p(x|y), that is, to generate images conditioned on a set of input values, y (refer to the introduction to generative models). The conditional input, y, can be an image, a categorical or continuous label, a noise vector, or any combination of these.
In conditional GANs, the discriminator is modified to receive both an image, x (real or fake), and its corresponding conditional variable, y, as a paired input (that is, D(x, y)). Though its output is still a value between 0 and 1 measuring how real the input seems, its task is slightly changed. To be considered real, an image should not only look as if it were drawn from the training dataset; it should also correspond to its paired variable.
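To make the pairing concrete, here is a minimal NumPy sketch of a conditional discriminator. It is an illustration only, not a trained model: the weights are randomly initialized, the image size and class count are hypothetical (28 x 28 images, 10 classes), and the conditioning is done by the simplest possible scheme, concatenating the flattened image with a one-hot encoding of the label before a small MLP.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 10        # hypothetical number of label classes
IMG_SIZE = 28 * 28      # hypothetical flattened image size
HIDDEN = 64

# Randomly initialized weights, for illustration only (an actual
# discriminator would learn these adversarially during training).
W1 = rng.normal(0.0, 0.02, (IMG_SIZE + NUM_CLASSES, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.02, (HIDDEN, 1))
b2 = np.zeros(1)

def one_hot(y, num_classes=NUM_CLASSES):
    out = np.zeros(num_classes)
    out[y] = 1.0
    return out

def discriminator(x, y):
    """D(x, y): score how real image x looks *given* its condition y."""
    # Pair the image with its conditional variable by concatenation.
    inp = np.concatenate([x.ravel(), one_hot(y)])
    h = np.maximum(0.0, inp @ W1 + b1)      # ReLU hidden layer
    logit = float(h @ W2 + b2)
    return 1.0 / (1.0 + np.exp(-logit))     # sigmoid, output in (0, 1)

x = rng.random((28, 28))    # stand-in for a real or generated image
score = discriminator(x, y=3)
```

Because y enters the forward pass, the same image can receive different scores under different labels, which is exactly what pushes the generator to produce images that match their condition, not just realistic-looking ones.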
Imagine, for instance, that we want to train a generator, G, to create images of handwritten...