Introduction to iGAN
We are now familiar with using generative models such as pix2pix (see Chapter 4, Image-to-Image Translation) to generate images from sketches or segmentation masks. However, as most of us are not skilled artists, we can only draw simple sketches, and as a result, our generated images also have simple shapes. What if we could instead use a real image as input and use sketches to change the appearance of that real image?
In the early days of GANs, a paper titled Generative Visual Manipulation on the Natural Image Manifold by J.-Y. Zhu (a co-inventor of CycleGAN) et al. explored how to use a learned latent representation to perform image editing and morphing. The authors made a website, http://efrosgans.eecs.berkeley.edu/iGAN/, containing videos that demonstrate use cases such as the following:
- Interactive image generation: Generating images from sketches in real time, as shown here: