We just learned how GANs are used to generate images. The Least Squares GAN (LSGAN) is another simple variant of the GAN. As the name suggests, it uses the least squares error as the loss function instead of the sigmoid cross-entropy loss. With LSGAN, we can improve the quality of the images generated by the GAN. But how? And why does a vanilla GAN generate poor-quality images in the first place?
If you recall the loss function of the GAN, we used sigmoid cross-entropy as the loss function. The goal of the generator is to learn the distribution of the images in the training set, that is, the real data distribution, and to generate fake samples from a learned fake distribution that matches it. So, the GAN tries to bring the fake distribution as close to the true distribution as possible.
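To make this concrete, here is a minimal sketch of the vanilla GAN losses, assuming a TensorFlow setup in which the discriminator outputs raw logits; the names discriminator_loss, generator_loss, real_logits, and fake_logits are illustrative, not from the original text:

```python
import tensorflow as tf

# Sigmoid cross-entropy computed on the raw discriminator logits (vanilla GAN).
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_logits, fake_logits):
    # Real images should be classified as 1, fake images as 0.
    real_loss = bce(tf.ones_like(real_logits), real_logits)
    fake_loss = bce(tf.zeros_like(fake_logits), fake_logits)
    return real_loss + fake_loss

def generator_loss(fake_logits):
    # The generator tries to make the discriminator output 1 for fake images.
    return bce(tf.ones_like(fake_logits), fake_logits)
```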
But once the fake samples are on the correct side of the decision surface, the sigmoid cross-entropy loss saturates: the discriminator already classifies them as real, so the generator receives almost no gradient and is not pushed to move those samples any closer to the real data distribution, even if they are still far from it. The least squares loss, in contrast, penalizes samples in proportion to their distance from the decision boundary, so the generator keeps receiving useful gradients and continues pulling the fake samples toward the real data. This is why LSGAN produces better-quality images than the vanilla GAN.
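The change amounts to swapping the cross-entropy terms for squared errors. Below is a minimal sketch of the LSGAN losses with the commonly used labels of 0 for fake, 1 for real, and 1 as the generator's target; again, real_scores and fake_scores (the discriminator's raw, unsquashed outputs) are assumed names for illustration:

```python
def lsgan_discriminator_loss(real_scores, fake_scores):
    # Push real scores toward 1 and fake scores toward 0.
    return 0.5 * (tf.reduce_mean(tf.square(real_scores - 1.0)) +
                  tf.reduce_mean(tf.square(fake_scores)))

def lsgan_generator_loss(fake_scores):
    # Push fake scores toward 1; the penalty grows with the distance from
    # the target, so the gradient does not vanish for samples that are
    # already classified as real but still far from the decision boundary.
    return 0.5 * tf.reduce_mean(tf.square(fake_scores - 1.0))
```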