Generative Adversarial Networks (GANs) are a recent and popular family of networks. Their main attraction is the generative side: we can train a network to generate new data samples that resemble a reference dataset.
A few years ago, researchers used Deep Belief Networks (DBNs) for this task. A DBN consists of a visible layer followed by a stack of hidden layers, with the topmost layers connected in an undirected, bidirectional fashion. Training such networks proved quite difficult, which motivated the search for new architectures.
Enter our GAN. How can we train a network to generate samples that are similar to a reference? First, we need to design a generator network. Typically, we feed a set of random variables into a stack of dense and conv2d_transpose layers. The latter does the opposite of a conv2d layer: it takes an input shaped like a convolution's output and produces an output shaped like that convolution's input, effectively upsampling the data.
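To make the shape relationship concrete, here is a minimal NumPy sketch (an illustration, not a library implementation) of a single-channel strided convolution and its transposed counterpart. The kernel size and stride values are arbitrary choices for the demonstration: with a 2x2 kernel and stride 2, conv2d maps an 8x8 input down to 4x4, and conv2d_transpose maps that 4x4 result back up to 8x8.

```python
import numpy as np

def conv2d(x, k, stride=2):
    """Plain strided 2-D convolution (single channel, 'valid' padding)."""
    kh, kw = k.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    y = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Dot product of the kernel with one input patch
            y[i, j] = np.sum(x[i*stride:i*stride+kh, j*stride:j*stride+kw] * k)
    return y

def conv2d_transpose(y, k, stride=2):
    """Transposed convolution: maps a conv output shape back to the conv input shape."""
    kh, kw = k.shape
    oh = (y.shape[0] - 1) * stride + kh
    ow = (y.shape[1] - 1) * stride + kw
    x = np.zeros((oh, ow))
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            # Each output value spreads the kernel back over an input-sized patch
            x[i*stride:i*stride+kh, j*stride:j*stride+kw] += y[i, j] * k
    return x

rng = np.random.default_rng(0)
x = rng.random((8, 8))
k = rng.random((2, 2))
y = conv2d(x, k)               # 8x8 -> 4x4 (downsampling)
x_up = conv2d_transpose(y, k)  # 4x4 -> 8x8 (upsampling)
print(y.shape, x_up.shape)     # (4, 4) (8, 8)
```

In a real generator you would use a framework layer such as `tf.keras.layers.Conv2DTranspose`, which adds multiple channels, learnable kernels, and padding options, but the shape arithmetic is the same.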