Building a DeepFake model
The deep learning model used in the original deepfake is autoencoder-based. There are two autoencoders, one for each face domain, and they share a single encoder, so the model contains one encoder and two decoders in total. The autoencoders expect an image size of 64×64 for both the input and the output. Now, let's build the encoder.
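The shared-encoder, two-decoder arrangement can be sketched with the Keras functional API. This is a minimal illustration of the wiring only; the layer sizes, latent dimension, and helper names (`build_encoder`, `build_decoder`) are assumptions for the sketch, not the book's exact architecture:

```python
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import (Conv2D, Dense, Flatten,
                                     Reshape, UpSampling2D)

IMG_SHAPE = (64, 64, 3)  # the autoencoders work on 64x64 images

def build_encoder():
    # Illustrative encoder: one strided conv, then a dense bottleneck.
    inp = Input(shape=IMG_SHAPE)
    x = Conv2D(32, 5, strides=2, padding='same', activation='relu')(inp)
    x = Flatten()(x)
    latent = Dense(256)(x)  # low-dimensional representation
    return Model(inp, latent, name='shared_encoder')

def build_decoder(name):
    # Illustrative decoder: mirror the encoder back up to 64x64.
    inp = Input(shape=(256,))
    x = Dense(32 * 32 * 32, activation='relu')(inp)
    x = Reshape((32, 32, 32))(x)
    x = UpSampling2D()(x)  # 32x32 -> 64x64
    out = Conv2D(3, 5, padding='same', activation='sigmoid')(x)
    return Model(inp, out, name=name)

encoder = build_encoder()          # one encoder, shared by both domains
decoder_a = build_decoder('decoder_A')
decoder_b = build_decoder('decoder_B')

# One autoencoder per face domain, both reusing the same encoder object,
# so its weights are updated by training on either domain.
x = Input(shape=IMG_SHAPE)
autoencoder_a = Model(x, decoder_a(encoder(x)))
autoencoder_b = Model(x, decoder_b(encoder(x)))
```

Because both autoencoders hold a reference to the same `encoder` model, training either one updates the shared encoder weights, which is what lets the encoder learn a face representation common to both domains.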
Building the encoder
As we learned in the previous chapter, the encoder is responsible for converting high-dimensional images into a low-dimensional representation. We'll first write a helper function that encapsulates a downsampling block: a strided convolutional layer followed by a leaky ReLU activation:
def downsample(filters):
    return Sequential([
        Conv2D(filters, kernel_size=5, strides=2, padding='same'),
        LeakyReLU(0.1)])
In the usual autoencoder implementation, the output...