In this chapter, we investigated numerical properties of samples produced with adversarial methods, especially Generative Adversarial Networks. We showed that fake samples have statistical properties that are barely noticeable by visual inspection, namely that, due to stochastic gradient descent and the requirement of differentiability, fake samples smoothly approximate the dominating modes of the distribution. We analyzed statistical measures of divergence between the real data and the generated data, and the results showed that even in simple cases, for instance the distribution of pixel intensities, the divergence between training data and fake data is large compared with the divergence between training data and test data.
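As a concrete illustration of the kind of comparison described above, the sketch below estimates pixel-intensity histograms for a training, a test, and a generated batch of images and compares them with the Jensen-Shannon distance. It is a minimal example under assumed inputs: the image arrays, bin count, and shifted "generator" distribution are placeholders, not the data or protocol used in this chapter.

```python
# Minimal sketch: compare pooled pixel-intensity histograms of training,
# test, and generated images with a symmetric divergence.
# The arrays below are synthetic stand-ins for real image batches.
import numpy as np
from scipy.spatial.distance import jensenshannon

def intensity_histogram(images, bins=256):
    """Normalized histogram of pixel intensities (values in [0, 1]) pooled over a batch."""
    hist, _ = np.histogram(images.ravel(), bins=bins, range=(0.0, 1.0))
    hist = hist.astype(np.float64)
    return hist / hist.sum()

rng = np.random.default_rng(0)
train_images = rng.beta(2.0, 5.0, size=(1000, 32, 32))  # stand-in for training set
test_images  = rng.beta(2.0, 5.0, size=(1000, 32, 32))  # same distribution as training
fake_images  = rng.beta(2.5, 5.0, size=(1000, 32, 32))  # slightly shifted "generator" output

p_train = intensity_histogram(train_images)
p_test  = intensity_histogram(test_images)
p_fake  = intensity_histogram(fake_images)

# Jensen-Shannon distance: small for train vs. test (same distribution),
# noticeably larger for train vs. fake samples.
print("JS(train, test):", jensenshannon(p_train, p_test))
print("JS(train, fake):", jensenshannon(p_train, p_fake))
```

In this toy setup the train-test distance stays close to zero while the train-fake distance is clearly larger, mirroring the qualitative behaviour reported above, although the actual magnitudes depend entirely on the assumed data.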
Although it is not common practice, one could possibly circumvent the difference in support between the real and fake data by training Generators that explicitly sample a distribution that...