In the previous section, we tweaked the input image's pixels slightly. In this section, we will modify the input image a little more, so that we end up with an image that still depicts the same object but is more artistic than the original. This algorithm forms the backbone of style-transfer techniques using neural networks.
Let's go through the intuition of how DeepDream works.
We will pass our image through a pre-trained model (VGG19, in this example). We have already learned that, depending on the input image, certain filters in the pre-trained model activate strongly while others activate only weakly.
We will specify the layers of the neural network whose activations we want to maximize.
We then adjust the input pixel values, using gradient ascent, until the activations of the chosen layers reach their maximum.
However, we will also ensure...