The model that won the first MSCOCO Image Captioning Challenge in 2015 is described in the paper Show and Tell: Lessons learned from the 2015 MSCOCO Image Captioning Challenge (https://arxiv.org/pdf/1609.06647.pdf). Before we discuss the training process, which is also covered well in the TensorFlow im2txt model documentation at https://github.com/tensorflow/models/tree/master/research/im2txt, let's first get a basic understanding of how the model works. This will also help you understand the training and inference code in Python, as well as the inference code in iOS and Android that you'll see later in the chapter.
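At a high level, the Show and Tell model pairs a CNN image encoder with an LSTM decoder: the CNN turns the image into a feature vector, which seeds the recurrent network, and the decoder then emits one caption word per step until it produces an end-of-sentence token. The following is a rough, self-contained sketch of that inference flow only; the toy vocabulary, random weights, and the simplified recurrent cell are illustrative stand-ins, not the actual im2txt implementation.

```python
import numpy as np

# Toy vocabulary; the real model uses a vocabulary of thousands of words.
VOCAB = ["<start>", "a", "dog", "on", "grass", "<end>"]
EMBED_DIM = 8

rng = np.random.default_rng(0)
word_embeddings = rng.normal(size=(len(VOCAB), EMBED_DIM))  # untrained, for illustration
output_weights = rng.normal(size=(EMBED_DIM, len(VOCAB)))

def cnn_encode(image):
    """Stand-in for the CNN encoder (Inception in the paper): image -> feature vector."""
    return np.tanh(image.mean(axis=(0, 1)))  # toy pooling down to EMBED_DIM dims

def rnn_step(state, word_vec):
    """Toy recurrent update standing in for the LSTM cell."""
    return np.tanh(state + word_vec)

def greedy_caption(image, max_len=10):
    """Greedy decoding: feed image features in, then pick the top word each step."""
    state = cnn_encode(image)  # image features initialize the decoder state
    word = "<start>"
    caption = []
    for _ in range(max_len):
        state = rnn_step(state, word_embeddings[VOCAB.index(word)])
        word = VOCAB[int(np.argmax(state @ output_weights))]  # greedy word choice
        if word == "<end>":
            break
        caption.append(word)
    return caption

toy_image = rng.normal(size=(4, 4, EMBED_DIM))  # fake "image" with EMBED_DIM channels
print(greedy_caption(toy_image))
```

Note that the actual model uses beam search rather than this simple greedy loop, keeping the top several candidate sentences at each step, but the encoder-to-decoder handoff shown here is the core idea.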
The winning Show and Tell model is trained using an end-to-end method, similar to the latest deep learning-based speech recognition models we covered briefly in the previous chapter. It uses the MSCOCO image...