Fine-tuning GPT-3
This section shows how to fine-tune GPT-3 to learn logic. Transformers need to learn logic, inferences, and entailment to understand language at a human level.
Fine-tuning is the key to making GPT-3 your own application and to customizing it to fit the needs of your project. It gives you the freedom to reduce bias in your application, teach the model what you want it to know, and leave your footprint on AI.
In this section, GPT-3 will be trained on the works of Immanuel Kant using kantgpt.csv. We used a similar file to train the BERT-type model in Chapter 4, Pretraining a RoBERTa Model from Scratch.
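Before fine-tuning, the raw CSV has to be converted into the prompt/completion JSONL layout that OpenAI's legacy fine-tuning endpoint consumes. The sketch below assumes kantgpt.csv contains one text passage per row in its first column (the exact column layout is an assumption); the function name csv_to_jsonl is illustrative:

```python
import csv
import json

def csv_to_jsonl(csv_path, jsonl_path, text_column=0):
    """Convert a one-text-column CSV (format assumed) into the
    prompt/completion JSONL records expected by the legacy
    OpenAI fine-tuning endpoint."""
    with open(csv_path, newline="", encoding="utf-8") as src, \
         open(jsonl_path, "w", encoding="utf-8") as dst:
        for row in csv.reader(src):
            if not row:
                continue
            text = row[text_column].strip()
            if not text:
                continue
            # An empty prompt means the model simply learns to
            # continue Kant-like text; the leading space in the
            # completion follows OpenAI's formatting guidance.
            record = {"prompt": "", "completion": " " + text}
            dst.write(json.dumps(record) + "\n")
```

With the JSONL file in hand, OpenAI's data-preparation tooling can further validate and clean it before the actual fine-tune is launched.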
Once you master fine-tuning GPT-3, you can use other types of data to teach it specific domains, knowledge graphs, and texts.
OpenAI provides an efficient, well-documented service to fine-tune GPT-3 engines. It has trained GPT-3 models to become different types of engines, as seen in The rise of billion-parameter transformer models section.
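Once the training file has been uploaded, a fine-tune is created with a single API request. The sketch below only assembles the JSON body for OpenAI's legacy /v1/fine-tunes endpoint rather than sending it; the helper name build_finetune_request is hypothetical, and the "davinci" base model and epoch count are illustrative assumptions:

```python
def build_finetune_request(training_file_id: str,
                           model: str = "davinci",
                           n_epochs: int = 4) -> dict:
    """Assemble the JSON body for a fine-tune creation request
    against OpenAI's legacy /v1/fine-tunes endpoint (sketch)."""
    return {
        # File id returned by the earlier file-upload step.
        "training_file": training_file_id,
        # Base engine to fine-tune; "davinci" is an example choice.
        "model": model,
        # Number of passes over the training data.
        "n_epochs": n_epochs,
    }

# The body would then be POSTed to the fine-tunes endpoint with an
# Authorization: Bearer <API key> header, via the OpenAI CLI or SDK.
```

Keeping the request construction separate from the network call makes the parameters easy to inspect and adjust before spending compute on a fine-tuning run.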