Building an image caption generator using PyTorch
For this exercise, we will be using the Common Objects in Context (COCO) dataset (available at http://cocodataset.org/#overview), which is a large-scale object detection, segmentation, and captioning dataset.
This dataset consists of over 200,000 labeled images, each paired with five captions. First released in 2014, COCO has significantly advanced object recognition-related computer vision research, and it remains one of the most commonly used benchmarks for tasks such as object detection, semantic segmentation, instance segmentation, and image captioning.
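To get a feel for how the captions are organized, the following is a minimal sketch of inspecting COCO's caption annotations with the pycocotools library. The annotation file path is an assumption here; adjust it to wherever you extract the dataset downloaded from the COCO website:

```python
# A hypothetical path to the extracted caption annotations -- adjust as needed.
from pycocotools.coco import COCO

ann_file = 'data/annotations/captions_train2014.json'
coco = COCO(ann_file)

# Each image ID maps to (roughly) five caption annotations.
img_id = list(coco.imgs.keys())[0]
ann_ids = coco.getAnnIds(imgIds=img_id)
for ann in coco.loadAnns(ann_ids):
    print(ann['caption'])
```

Running this on the training split should print the five human-written captions associated with the first image in the annotation file.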
In this exercise, we will use PyTorch to train a CNN-LSTM model on this dataset and then use the trained model to generate captions for unseen samples. Before we do that, though, we need to take care of a few prerequisites.
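Before diving into those prerequisites, the following is a minimal sketch of what a CNN-LSTM captioning architecture looks like, assuming a pretrained ResNet-152 encoder and a single-layer LSTM decoder with greedy decoding. The class and parameter names (EncoderCNN, DecoderRNN, embed_size, and so on) are illustrative choices, not taken from the exercise code:

```python
import torch
import torch.nn as nn
import torchvision.models as models

class EncoderCNN(nn.Module):
    """Encodes an image into a fixed-size feature vector."""
    def __init__(self, embed_size):
        super().__init__()
        resnet = models.resnet152(weights=models.ResNet152_Weights.DEFAULT)
        # Drop the final classification layer; keep the pooled features.
        self.resnet = nn.Sequential(*list(resnet.children())[:-1])
        self.linear = nn.Linear(resnet.fc.in_features, embed_size)
        self.bn = nn.BatchNorm1d(embed_size, momentum=0.01)

    def forward(self, images):
        with torch.no_grad():  # keep the pretrained CNN frozen
            features = self.resnet(images)
        features = features.reshape(features.size(0), -1)
        return self.bn(self.linear(features))

class DecoderRNN(nn.Module):
    """Generates a caption, one word at a time, from image features."""
    def __init__(self, embed_size, hidden_size, vocab_size, num_layers=1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, num_layers,
                            batch_first=True)
        self.linear = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions):
        # Prepend the image feature as the first "token" of the sequence.
        embeddings = self.embed(captions)
        inputs = torch.cat((features.unsqueeze(1), embeddings), 1)
        hiddens, _ = self.lstm(inputs)
        return self.linear(hiddens)

    def sample(self, features, max_len=20):
        # Greedy decoding: feed the most likely word back in at each step.
        inputs, states, ids = features.unsqueeze(1), None, []
        for _ in range(max_len):
            hiddens, states = self.lstm(inputs, states)
            predicted = self.linear(hiddens.squeeze(1)).argmax(1)
            ids.append(predicted)
            inputs = self.embed(predicted).unsqueeze(1)
        return torch.stack(ids, 1)  # (batch, max_len) word indices
```

Note that this sketch omits details the full exercise will handle, such as padding variable-length captions during training; it is meant only to convey the overall encoder-decoder structure.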
Note
We will refer only to the important snippets of code for illustration purposes. The full exercise...