Once we have downloaded the data from the specified location, the next task is to process the video image frames to extract features from the last fully connected layer of a pre-trained convolutional neural network. We use a VGG16 convolutional neural network pre-trained on ImageNet, and we take the activations from its last fully connected layer. Since the last fully connected layer of VGG16 has 4096 units, our feature vector for each time step $t$ is 4096-dimensional, that is, $f_t \in \mathbb{R}^{4096}$.
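A minimal sketch of this feature extraction follows, assuming the Keras implementation of VGG16, in which the two 4096-unit fully connected layers are named `fc1` and `fc2`; truncating the network at `fc2` yields one 4096-dimensional vector per frame:

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model

# Load VGG16 pre-trained on ImageNet, keeping the fully connected head.
base = VGG16(weights="imagenet", include_top=True)

# Truncate at the last fully connected layer ("fc2", 4096 units) so the
# model outputs a 4096-dimensional feature vector for each input frame.
feature_extractor = Model(inputs=base.input,
                          outputs=base.get_layer("fc2").output)

def extract_features(frames):
    """frames: array of shape (num_frames, 224, 224, 3), RGB.

    Returns an array of shape (num_frames, 4096), one feature
    vector f_t per frame.
    """
    x = preprocess_input(np.asarray(frames, dtype="float32"))
    return feature_extractor.predict(x)
```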
Before the image frames can be processed through the VGG16, they need to be sampled from the video. We sample the frames so that each video is represented by exactly 80 of them. After processing these 80 frames through the VGG16, each video is represented by 80 feature vectors $f_1, f_2, \ldots, f_t, \ldots, f_{80}$.
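The text does not specify the sampling scheme beyond yielding 80 frames per video; the sketch below assumes uniform sampling over the video's length using OpenCV, with `sample_frames` as a hypothetical helper name:

```python
import cv2
import numpy as np

def sample_frames(video_path, num_frames=80, size=(224, 224)):
    """Uniformly sample num_frames RGB frames, resized for VGG16 input."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Evenly spaced frame indices; short videos repeat frames to reach 80
    # (an assumption -- the source does not say how short clips are handled).
    indices = np.linspace(0, max(total - 1, 0), num_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV reads BGR
        frames.append(cv2.resize(frame, size))
    cap.release()
    return np.stack(frames)  # shape: (num_frames, 224, 224, 3)
```

Combining the two sketches, `extract_features(sample_frames("video.mp4"))` would produce an array of shape `(80, 4096)`, holding the vectors $f_1$ through $f_{80}$ for one video.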