Summary
In this chapter, we applied generative machine learning to images by producing an image that combines the content of one image with the style of another – a task known as neural style transfer. First, we explored the idea behind the style transfer algorithm, in particular the use of the Gram matrix to extract the style of an image.
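As a brief illustration of that idea, the Gram matrix of a convolutional feature map can be computed as below. This is a minimal sketch, not code from the chapter; the `gram_matrix` helper and the random feature map standing in for a VGG19 activation are illustrative.

```python
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    # features: a (batch, channels, height, width) activation map
    b, c, h, w = features.size()
    flat = features.view(b * c, h * w)   # flatten the spatial dimensions
    gram = flat @ flat.t()               # channel-to-channel correlations
    return gram / (b * c * h * w)        # normalize by the number of elements

# A random tensor stands in for a real VGG19 feature map.
feats = torch.randn(1, 64, 32, 32)
g = gram_matrix(feats)
print(g.shape)  # torch.Size([64, 64])
```

Because the spatial dimensions are summed out, the Gram matrix captures which channels fire together – the texture statistics of the image – while discarding where things are, which is exactly why it serves as a style representation.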
Next, we used PyTorch to build our own neural style transfer model. We used parts of a pre-trained VGG19 model to extract content and style information through some of its convolutional layers, and we replaced the model's max pooling layers with average pooling layers for smoother gradient flow. We then fed a randomly initialized image into the style transfer model and, guided by a style loss and a content loss, optimized its pixels directly with gradient descent.
This input image evolves over epochs and gives us the final generated image, which contains the content of the content image and the style of the style image.