TL with BERT and GPT
Having grasped the fundamental concepts of pre-trained models and TL, it’s time to put theory into practice. It’s one thing to know the ingredients; it’s another to know how to combine them into a delicious dish. In this section, we will take models that have already learned a great deal from their pre-training and fine-tune them to perform a new, related task. This process involves adjusting the model’s parameters to better suit the new task, much like tuning a musical instrument:
Figure 12.8 – ITL
ITL takes a model that was pre-trained on a semi-supervised (or unsupervised) task and then fine-tunes it on labeled data to learn a specific downstream task.
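The core mechanic of ITL can be sketched without any deep learning library: a feature extractor whose weights were learned during pre-training is frozen, and only a small, newly initialized task head is trained on the labeled data. In the sketch below, a fixed random projection stands in for the pre-trained encoder (in practice this would be BERT or GPT), and the task head is a logistic-regression layer trained from scratch; all names and the toy dataset are illustrative assumptions, not the book's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained encoder: a fixed projection whose weights
# were "learned during pre-training". We FREEZE it (never update it).
W_pretrained = rng.normal(size=(10, 16)) * 0.3

def encode(X):
    # Frozen feature extractor: gradients never flow into W_pretrained.
    return np.tanh(X @ W_pretrained)

# A small labeled dataset for the new, specific task.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# New task head, initialized from scratch: logistic regression.
w = np.zeros(16)
b = 0.0
lr = 0.5

H = encode(X)  # features computed once, since the encoder is frozen
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(H @ w + b)))  # sigmoid predictions
    grad_w = H.T @ (p - y) / len(y)         # cross-entropy gradient
    grad_b = np.mean(p - y)
    w -= lr * grad_w                        # only the head is updated
    b -= lr * grad_b

acc = np.mean(((H @ w + b) > 0) == (y == 1))
print(f"head-only training accuracy: {acc:.2f}")
```

With real models the same pattern applies: you can keep the pre-trained layers frozen and train only the head (as above), or unfreeze some or all layers and update them with a small learning rate, which is what is usually meant by fine-tuning.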
Examples of TL
Let’s take a look at some examples of TL with specific pre-trained models.
Example – Fine-tuning a pre-trained model for text classification
Consider a simple text classification problem. Suppose we...