Finding your best base model
At this point in the book, you should have learned how to pick your use case, how to find a dataset, and how to compare that dataset with research datasets, particularly those available in the open source community. Now comes the fun part: picking your model!
Most likely, you already have a few candidates in mind. If you’re working with natural language, you’re probably thinking about something in the family of the Generative Pre-trained Transformer (GPT) for a generative use case, BERT for classification, or T5 for something akin to translation. For vision, you may be looking at CoCa (1), CLIP (2), or a jointly masked vision and language model (3). For multimodal datasets, you might pick a model straight from the vision examples or something more specialized, depending on your specific use case.
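As a quick way to get your candidates side by side, the following minimal sketch loads one representative checkpoint per task type with the Hugging Face transformers library and prints its size. The checkpoint names here (gpt2, bert-base-uncased, t5-small) are only illustrative stand-ins; swap in whichever candidates you are actually weighing for your own use case.

```python
# A minimal sketch: instantiate one candidate base model per task family
# and compare their parameter counts. Checkpoint names are illustrative only.
from transformers import (
    AutoModelForCausalLM,                 # GPT-style generative models
    AutoModelForSequenceClassification,   # BERT-style classifiers
    AutoModelForSeq2SeqLM,                # T5-style sequence-to-sequence models
)

candidates = {
    "generation": AutoModelForCausalLM.from_pretrained("gpt2"),
    "classification": AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    ),
    "translation": AutoModelForSeq2SeqLM.from_pretrained("t5-small"),
}

for task, model in candidates.items():
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{task}: {model.config.model_type}, {n_params / 1e6:.0f}M parameters")
```

Even a simple comparison like this helps you frame the trade-off between model size, task fit, and the compute budget you identified earlier before committing to a base model.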
In Chapter 1, An Introduction to Pretraining Foundation Models, we briefly introduced some of these state...