Semi-supervised learning
Last but not least, such generative adversarial networks can be used to enhance supervised learning itself.
Suppose the objective is to classify K classes, for which a limited amount of labeled data is available. Samples produced by a generative model can be added to the dataset and treated as belonging to a (K+1)-th class, the fake-data class.
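The (K+1)-class dataset construction above can be sketched as follows; the data here is random placeholder arrays standing in for real labeled images and generator outputs G(z), and the shapes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10                                    # number of real classes

x_real = rng.normal(size=(64, 28 * 28))   # placeholder for a labeled real batch
y_real = rng.integers(0, K, size=64)      # labels in 0..K-1

x_fake = rng.normal(size=(64, 28 * 28))   # placeholder for generator samples G(z)
y_fake = np.full(64, K)                   # index K = the (K+1)-th "fake" class

# The classifier is then trained as an ordinary (K+1)-way classifier on (x, y).
x = np.concatenate([x_real, x_fake])
y = np.concatenate([y_real, y_fake])
```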
Decomposing the training cross-entropy loss of the new classifier between the two sets (real data and generated data) leads to the following formula:

$$L = -\mathbb{E}_{x,y \sim p_{\text{data}}(x,y)} \log p_{\text{model}}(y \mid x) \;-\; \mathbb{E}_{x \sim G} \log p_{\text{model}}(y = K+1 \mid x)$$
Here, $p_{\text{model}}(y = j \mid x)$ is the probability predicted by the model, a softmax over the classifier's $K+1$ output logits $l_j(x)$:

$$p_{\text{model}}(y = j \mid x) = \frac{\exp(l_j(x))}{\sum_{k=1}^{K+1} \exp(l_k(x))}$$
Note that if we know that the data is real, its class lies among the first $K$, and the probabilities renormalize accordingly:

$$p_{\text{model}}(y \mid x, y < K+1) = \frac{p_{\text{model}}(y \mid x)}{1 - p_{\text{model}}(y = K+1 \mid x)}$$
And training on real data ($K$ classes) alone would have led to the loss:

$$L_{\text{supervised}} = -\mathbb{E}_{x,y \sim p_{\text{data}}(x,y)} \log p_{\text{model}}(y \mid x, y < K+1)$$
Hence the loss of the global classifier can be rewritten as:

$$L = L_{\text{supervised}} + L_{\text{unsupervised}}$$

where

$$L_{\text{unsupervised}} = -\left\{ \mathbb{E}_{x \sim p_{\text{data}}(x)} \log\left[1 - p_{\text{model}}(y = K+1 \mid x)\right] + \mathbb{E}_{x \sim G} \log p_{\text{model}}(y = K+1 \mid x) \right\}$$
The second term of the loss corresponds to the standard unsupervised GAN loss, with the discriminator defined as $D(x) = 1 - p_{\text{model}}(y = K+1 \mid x)$:

$$L_{\text{unsupervised}} = -\left\{ \mathbb{E}_{x \sim p_{\text{data}}(x)} \log D(x) + \mathbb{E}_{z \sim \text{noise}} \log\left(1 - D(G(z))\right) \right\}$$
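The key identity behind this decomposition, $p_{\text{model}}(y \mid x) = p_{\text{model}}(y \mid x, y < K+1)\,(1 - p_{\text{model}}(y = K+1 \mid x))$ for a real label $y$, can be checked numerically. This sketch uses random logits (an illustrative assumption, not a trained classifier):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10
logits = rng.normal(size=K + 1)           # l_1..l_{K+1}; index K is the "fake" class
p = np.exp(logits - logits.max())
p /= p.sum()                              # p_model(y=j|x): softmax over K+1 classes

p_fake = p[-1]                            # p_model(y=K+1|x) = 1 - D(x)
p_real_cond = p[:-1] / (1.0 - p_fake)     # p_model(y=j|x, y<K+1), renormalized

y = 3                                     # some real label
lhs = -np.log(p[y])                                   # term of the global loss L
rhs = -np.log(p_real_cond[y]) - np.log(1.0 - p_fake)  # supervised + unsupervised parts
assert np.allclose(lhs, rhs)              # the cross-entropy splits exactly
```

The same split, taken in expectation over real and generated batches, gives the $L = L_{\text{supervised}} + L_{\text{unsupervised}}$ decomposition above.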
The interaction introduced between the supervised and the unsupervised losses is still not well understood, but when the classification task is not trivial, the unsupervised loss helps improve the classifier.