1. Wasserstein GAN
As we've mentioned before, GANs are notoriously hard to train. The opposing objectives of the two networks, the discriminator and the generator, can easily cause training instability. The discriminator attempts to correctly distinguish fake data from real data, while the generator tries its best to trick the discriminator. If the discriminator learns much faster than the generator, its outputs saturate and the gradients vanish before reaching the generator, so the generator's parameters fail to optimize. On the other hand, if the discriminator learns too slowly, the feedback it gives the generator is noisy and uninformative. In the worst case, if the discriminator is unable to converge at all, the generator cannot get any useful feedback.
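To make the vanishing-gradient failure mode concrete, here is a minimal NumPy sketch (an illustration, not part of the original text). It compares the gradient that reaches the generator through the discriminator's logit under the original minimax generator loss, log(1 − D(G(z))), and the common non-saturating variant, −log D(G(z)), both from Goodfellow et al., 2014:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Discriminator logits for fake samples, ranging from "confidently fake"
# (very negative) to "fooled" (very positive).
logits = np.linspace(-8.0, 8.0, 5)
d = sigmoid(logits)  # d = D(G(z)), the discriminator's output probability

# Minimax generator loss log(1 - d): its gradient w.r.t. the logit is -d,
# which vanishes as the discriminator grows confident that samples are fake
# (d -> 0).
grad_minimax = -d

# Non-saturating loss -log(d): its gradient w.r.t. the logit is -(1 - d),
# which stays large exactly where the minimax gradient dies out.
grad_non_saturating = -(1.0 - d)

for a, g1, g2 in zip(logits, grad_minimax, grad_non_saturating):
    print(f"logit={a:+.1f}  minimax grad={g1:+.4f}  "
          f"non-saturating grad={g2:+.4f}")
```

Running the sketch shows that when the discriminator confidently rejects fakes (logit = −8), the minimax gradient is essentially zero while the non-saturating gradient is close to −1, which is why a discriminator that learns too quickly starves the generator of a training signal.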
WGAN argued that a GAN's inherent instability is due to its loss function, which is based on the Jensen-Shannon (JS) distance. In a GAN, the objective of the generator is to learn how to transform one source distribution (for example, noise) into an estimated target distribution.
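For reference, the connection between the GAN loss and the JS distance is a standard result from Goodfellow et al., 2014 (summarized here rather than quoted from the original passage). Writing p_data for the target distribution and p_g for the generator's distribution:

```latex
% Standard GAN minimax objective:
\min_G \max_D V(D, G) =
    \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\left[\log\bigl(1 - D(G(z))\bigr)\right]

% For a fixed generator, the optimal discriminator is
D^{*}(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)}

% Substituting D^{*} back in shows that the generator is, in effect,
% minimizing a JS distance:
V(D^{*}, G) = 2\,\mathrm{JS}\!\left(p_{\mathrm{data}} \,\|\, p_g\right) - 2\log 2
```

Because the JS distance saturates at log 2 whenever p_data and p_g have disjoint supports, its gradient with respect to the generator's parameters can vanish entirely, which is precisely the failure mode that WGAN's Wasserstein distance is designed to avoid.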