GAN implementation in Keras
In the previous section, we learned that the principles behind GANs are straightforward. We also learned how GANs can be built from familiar network layers such as CNNs and RNNs. What differentiates GANs from other networks is that they are notoriously difficult to train. Something as simple as a minor change in the layers can drive the network into training instability.
In this section, we'll examine one of the early successful implementations of GANs using deep CNNs. It is called DCGAN [3].
Figure 4.2.1 shows the DCGAN that is used to generate fake MNIST images. DCGAN recommends the following design principles:
- Use of strides > 1 convolutions instead of MaxPooling2D or UpSampling2D. With strides > 1, the CNN learns how to resize the feature maps.
- Avoid using Dense layers. Use CNN in all layers. The Dense layer is utilized only as the first layer of the generator to accept the z-vector. The output of the Dense layer is resized and becomes the input of the succeeding CNN layers, as in the sketch after this list.