Summary
In this chapter, we covered the evolution of style-based generative models. It all started with neural style transfer, where we learned that an image can be disentangled into content and style. The original algorithm was slow, and its iterative optimization at inference time was later replaced by a feed-forward network that could perform style transfer in real time.
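To recap why the original approach was slow, here is a minimal sketch of Gatys-style iterative optimization in TensorFlow. The layer choices, the loss weight, and the assumption that input images are already scaled to [0, 1] are illustrative, not the chapter's exact code:

```python
import tensorflow as tf

# Frozen VGG19 feature extractor; layer names are illustrative choices.
vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
content_layer = "block4_conv2"
style_layers = ["block1_conv1", "block2_conv1", "block3_conv1"]
outputs = [vgg.get_layer(n).output for n in [content_layer] + style_layers]
extractor = tf.keras.Model(vgg.input, outputs)
extractor.trainable = False

def gram_matrix(x):
    # Channel-wise correlations of one feature map: shape (C, C).
    _, h, w, c = x.shape
    f = tf.reshape(x, (-1, c))
    return tf.matmul(f, f, transpose_a=True) / tf.cast(h * w, tf.float32)

def style_transfer(content_img, style_img, steps=200, lr=0.02):
    # The slow part: we optimize the generated image's pixels directly,
    # so every new content/style pair requires re-running this loop.
    image = tf.Variable(content_img)
    opt = tf.keras.optimizers.Adam(learning_rate=lr)
    content_target = extractor(content_img)[0]
    style_targets = [gram_matrix(f) for f in extractor(style_img)[1:]]
    for _ in range(steps):
        with tf.GradientTape() as tape:
            feats = extractor(image)
            c_loss = tf.reduce_mean((feats[0] - content_target) ** 2)
            s_loss = tf.add_n([
                tf.reduce_mean((gram_matrix(f) - g) ** 2)
                for f, g in zip(feats[1:], style_targets)
            ])
            loss = c_loss + 1e-2 * s_loss  # weight is an arbitrary example
        grads = tape.gradient(loss, image)
        opt.apply_gradients([(grads, image)])
        image.assign(tf.clip_by_value(image, 0.0, 1.0))
    return image
```

A feed-forward style transfer network removes exactly this per-image optimization loop: the cost is paid once at training time, and inference becomes a single forward pass.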
We then learned that the Gram matrix is not the only way to represent style, and that we can use the layers' statistics instead. As a result, normalization layers were explored as a way to control the style of an image, which eventually led to the creation of AdaIN. By combining a feed-forward network with AdaIN, we implemented arbitrary style transfer in real time.
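As a reminder of how AdaIN uses those statistics, here is a minimal sketch; the function name and the NHWC tensor layout are assumptions for illustration. The content features are normalized with their own instance statistics, then rescaled with the style features' mean and standard deviation:

```python
import tensorflow as tf

def adain(content_feat, style_feat, eps=1e-5):
    # Per-instance, per-channel statistics over the spatial axes (NHWC).
    c_mean, c_var = tf.nn.moments(content_feat, axes=[1, 2], keepdims=True)
    s_mean, s_var = tf.nn.moments(style_feat, axes=[1, 2], keepdims=True)
    c_std = tf.sqrt(c_var + eps)
    s_std = tf.sqrt(s_var + eps)
    # Normalize content, then shift/scale to match the style statistics.
    return s_std * (content_feat - c_mean) / c_std + s_mean

# Usage with hypothetical feature maps:
content = tf.random.normal((1, 32, 32, 512))
style = tf.random.normal((1, 32, 32, 512))
stylized = adain(content, style)  # same shape as the content features
```

Because the style enters only through these two statistics, a single network can apply any style at inference time, which is what makes the transfer "arbitrary".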
With the success in style transfer, AdaIN found its way into GANs. We went over the MUNIT architecture in detail, looking at how AdaIN was used for multimodal image generation. There is a style-based GAN that you should be familiar with...