Summary
This chapter discussed the concept of combining a CNN model and an LSTM model in an encoder-decoder framework, jointly training them, and using the combined model to generate captions for an image. We first described what the model architecture for such a system would look like and how minor changes to that architecture can adapt it to related applications, such as activity recognition and video description. We also explored what building a vocabulary for a text dataset means in practice.
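As a concrete illustration, the sketch below shows one common way to build such a vocabulary with a frequency threshold. The class name Vocabulary, the helper build_vocabulary, the special tokens, and the min_freq cutoff are illustrative assumptions, not the chapter's exact code.

```python
from collections import Counter

class Vocabulary:
    """Maps words to integer indices and back (illustrative sketch)."""
    def __init__(self):
        self.word2idx = {}
        self.idx2word = {}

    def add_word(self, word):
        if word not in self.word2idx:
            idx = len(self.word2idx)
            self.word2idx[word] = idx
            self.idx2word[idx] = word

    def __call__(self, word):
        # Unknown words fall back to the <unk> token.
        return self.word2idx.get(word, self.word2idx['<unk>'])

    def __len__(self):
        return len(self.word2idx)

def build_vocabulary(captions, min_freq=4):
    """Keep only words that appear at least min_freq times (assumed cutoff)."""
    counter = Counter()
    for caption in captions:
        counter.update(caption.lower().split())

    vocab = Vocabulary()
    # Special tokens for padding, sequence boundaries, and unknown words.
    for token in ['<pad>', '<start>', '<end>', '<unk>']:
        vocab.add_word(token)
    for word, count in counter.items():
        if count >= min_freq:
            vocab.add_word(word)
    return vocab
```

Thresholding rare words keeps the output layer of the decoder small and maps one-off tokens to <unk> instead of wasting parameters on them.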
In the second and final part of this chapter, we implemented an image captioning system using PyTorch. We downloaded the datasets, wrote our own custom PyTorch dataset loader, built a vocabulary from the caption text dataset, and applied transformations to the images, such as reshaping, normalizing, random cropping, and horizontal flipping. We then defined the CNN-LSTM model architecture, along with the loss function and optimization schedule, and finally, we ran the training loop.
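A custom dataset loader along these lines pairs each image with a numericalized caption. The CaptionDataset name and the (image_file, caption) record format are assumptions for this sketch; in practice, a custom collate_fn is also needed to pad the variable-length captions into batches.

```python
import os
from PIL import Image
import torch
from torch.utils.data import Dataset

class CaptionDataset(Dataset):
    """Pairs each image with one tokenized caption (illustrative sketch)."""
    def __init__(self, image_dir, records, vocab, transform=None):
        self.image_dir = image_dir
        self.records = records      # assumed list of (image_file, caption) pairs
        self.vocab = vocab
        self.transform = transform

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        image_file, caption = self.records[idx]
        image = Image.open(os.path.join(self.image_dir, image_file)).convert('RGB')
        if self.transform is not None:
            image = self.transform(image)
        # Numericalize the caption as <start> w1 ... wn <end>.
        tokens = caption.lower().split()
        ids = ([self.vocab('<start>')] + [self.vocab(t) for t in tokens]
               + [self.vocab('<end>')])
        return image, torch.tensor(ids)
```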
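The image transformations can be composed with torchvision.transforms. The crop size and normalization statistics below (the standard ImageNet mean and standard deviation) are typical choices, not necessarily the exact values used in the chapter.

```python
import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.Resize(256),             # reshape the shorter side to 256 px
    transforms.RandomCrop(224),         # random 224x224 crop for augmentation
    transforms.RandomHorizontalFlip(),  # flip left-right with probability 0.5
    transforms.ToTensor(),              # PIL image -> float tensor in [0, 1]
    transforms.Normalize((0.485, 0.456, 0.406),   # ImageNet channel means
                         (0.229, 0.224, 0.225)),  # ImageNet channel stds
])
```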
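Finally, a minimal sketch of the CNN-LSTM architecture, loss function, and optimizer might look as follows, assuming a frozen pretrained ResNet encoder and a single-layer LSTM decoder trained with teacher forcing. All hyperparameters here (embedding size, hidden size, learning rate) are placeholder values.

```python
import torch
import torch.nn as nn
import torchvision.models as models
from torch.nn.utils.rnn import pack_padded_sequence

class CNNEncoder(nn.Module):
    """Pretrained CNN (ResNet-152 assumed) emitting a fixed-size image embedding."""
    def __init__(self, embed_size):
        super().__init__()
        resnet = models.resnet152(pretrained=True)
        # Drop the final classification layer; keep the pooled features.
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])
        self.linear = nn.Linear(resnet.fc.in_features, embed_size)
        self.bn = nn.BatchNorm1d(embed_size)

    def forward(self, images):
        with torch.no_grad():           # keep the pretrained CNN frozen
            features = self.backbone(images)
        return self.bn(self.linear(features.flatten(1)))

class LSTMDecoder(nn.Module):
    """LSTM that generates a caption conditioned on the image embedding."""
    def __init__(self, embed_size, hidden_size, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.linear = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions, lengths):
        embeddings = self.embed(captions)
        # Prepend the image feature as the first "word" of each sequence.
        inputs = torch.cat((features.unsqueeze(1), embeddings), dim=1)
        packed = pack_padded_sequence(inputs, lengths, batch_first=True,
                                      enforce_sorted=False)
        hidden, _ = self.lstm(packed)
        return self.linear(hidden.data)

encoder = CNNEncoder(embed_size=256)
decoder = LSTMDecoder(embed_size=256, hidden_size=512, vocab_size=len(vocab))
criterion = nn.CrossEntropyLoss()
# Only train the decoder and the encoder's new projection layers.
params = (list(decoder.parameters()) + list(encoder.linear.parameters())
          + list(encoder.bn.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)

def train_step(images, captions, lengths):
    """One teacher-forced training step; captions are padded [B, T] token ids."""
    outputs = decoder(encoder(images), captions, lengths)
    # Pack the targets the same way so they align with the packed outputs.
    targets = pack_padded_sequence(captions, lengths, batch_first=True,
                                   enforce_sorted=False).data
    loss = criterion(outputs, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the CNN and training only its projection layers is a common compromise: the pretrained visual features transfer well, while the linear and batch-norm layers learn to map them into the decoder's embedding space.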