Improving DeepFakes with GANs
The output images of the deepfake autoencoder can be a little blurry, so how can we improve them? To recap, the deepfake pipeline can be broken into two main stages – face image processing and face generation. The latter can be framed as an image-to-image translation problem, which we covered in depth in Chapter 4, Image-to-Image Translation. Therefore, the natural next step is to use a GAN to improve output quality. One helpful model is faceswap-GAN, and we will now go over a high-level overview of it. In faceswap-GAN, the autoencoder from the original deepfake is enhanced with residual blocks and self-attention blocks (see Chapter 8, Self-Attention for Image Generation) and used as the generator. The discriminator architecture is as follows:

Figure 9.10 - faceswap-GAN's discriminator architecture (Redrawn from: https://github.com/shaoanlu/faceswap-GAN)
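To make the self-attention enhancement of the generator concrete, here is a minimal NumPy sketch of a SAGAN-style self-attention block applied to a convolutional feature map. This is an illustration of the general technique from Chapter 8, not faceswap-GAN's exact implementation; the weight shapes, the channel reduction for the query/key projections, and the `gamma` parameter are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_f, w_g, w_h, gamma=0.0):
    """SAGAN-style self-attention on a (C, H, W) feature map.

    w_f, w_g project channels down for the query/key maps,
    w_h keeps C channels for the value map (shapes are illustrative).
    gamma scales the attention output before the residual connection;
    SAGAN initializes it to 0 so attention is phased in during training.
    """
    c, h, w = x.shape
    flat = x.reshape(c, h * w)            # (C, N) with N = H*W
    f = w_f @ flat                        # query, (C', N)
    g = w_g @ flat                        # key,   (C', N)
    v = w_h @ flat                        # value, (C, N)
    attn = softmax(f.T @ g, axis=-1)      # (N, N) attention over positions
    out = v @ attn.T                      # (C, N) attention-weighted values
    return (gamma * out + flat).reshape(c, h, w)  # residual connection
```

With `gamma=0`, the block is an identity mapping, which is why it can be dropped into a pretrained autoencoder without disrupting it; the network then learns how much long-range attention to mix in.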
We can learn a lot about the discriminator...