Implementing StyleGAN
ProGAN is great at generating high-resolution images by growing the network progressively, but its architecture is quite primitive: like earlier GANs such as DCGAN, it generates images from random noise without fine control over the images being generated.
As we have seen in previous chapters, many innovations in image-to-image translation allow better manipulation of the generator's outputs. One of them is the AdaIN layer (Chapter 5, Style Transfer), which enables style transfer by mixing the content and style features of two different images. StyleGAN adopts this concept of style mixing to come up with a style-based generator architecture for generative adversarial networks – this is, in fact, the title of the paper, written by researchers at NVIDIA. The following figure shows that StyleGAN can mix the style features from two different images to generate a new one:
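To recap how AdaIN mixes content and style, here is a minimal NumPy sketch: the content features are normalized per channel, then rescaled and shifted using the style features' per-channel statistics. The function name `adain` and the `(H, W, C)` feature-map layout are assumptions for illustration, not code from the paper.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization (illustrative sketch).

    content, style: feature maps of shape (H, W, C).
    Normalizes each channel of `content` to zero mean and unit std,
    then applies the per-channel std and mean of `style`.
    """
    # Per-channel statistics over the spatial dimensions
    c_mean = content.mean(axis=(0, 1), keepdims=True)
    c_std = content.std(axis=(0, 1), keepdims=True)
    s_mean = style.mean(axis=(0, 1), keepdims=True)
    s_std = style.std(axis=(0, 1), keepdims=True)

    # Whiten the content features, then re-color with style statistics
    normalized = (content - c_mean) / (c_std + eps)
    return s_std * normalized + s_mean
```

After this operation, the output keeps the spatial structure (content) of the first input while its channel-wise statistics match those of the second (style) – the same mechanism StyleGAN's generator uses at each resolution to inject style.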