Video data exploration
In this section, we will visualize a few sample files and then perform object detection to capture the image features that are most likely to show anomalies when manipulated to create deepfakes: mostly the eyes, the mouth, and the facial contours.
We will start by visualizing sample files, both genuine images and deepfakes. We will then apply the first face, eye, and mouth detection algorithm introduced previously, the one based on Haar cascades, and follow with the alternative algorithm, based on MTCNN.
Visualizing sample files
The following code block selects a few video files from the set of fake videos and then visualizes an image captured from each of them, using the display_image_from_video function from the utility script video_utils:
fake_train_sample_video = list(meta_train_df.loc[meta_train_df.label=='FAKE'].sample(3).index)
for video_file in fake_train_sample_video:
    display_image_from_video(os.path...