Summary
This chapter explored the potential of adapting an OpenAI model to our needs through fine-tuning. The process requires careful data analysis and preparation. We must also ensure that fine-tuning on OpenAI's platform does not violate our privacy, confidentiality, and security requirements.

We first built a fine-tuning process for a completion (generative) task by loading a pre-processed dataset of Immanuel Kant's Critique of Pure Reason and submitting it to OpenAI's data preparation tool, which converted our data into JSONL. An ada model was fine-tuned and stored, and we then ran it.

Then the babbage-002 model was fine-tuned for a classification (discriminative) task. This process brought us back to square one: can a standard OpenAI model achieve the same results as a fine-tuned model? If so, why bother fine-tuning a model? To satisfy our scientific curiosity, we ran davinci on the same task as the trained ada to classify a text to determine if it was about...
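As a reminder of the data format produced in this chapter, the following is a minimal sketch of the JSONL structure that OpenAI's fine-tuning pipeline consumes: one JSON object per line, each holding a prompt–completion pair. The sample pairs and the `to_jsonl` helper are illustrative, not the actual Kant dataset or the data preparation tool's code:

```python
import json

# Illustrative prompt-completion pairs (sample text, not the real dataset)
pairs = [
    {"prompt": "What is the transcendental aesthetic? ->",
     "completion": " The study of the a priori forms of sensibility."},
    {"prompt": "What are the categories of the understanding? ->",
     "completion": " Pure concepts that structure experience."},
]

def to_jsonl(records):
    """Serialize records into JSONL: one JSON object per line."""
    return "\n".join(json.dumps(record) for record in records)

jsonl_data = to_jsonl(pairs)
print(jsonl_data)
```

Each line of the resulting string is independently parseable JSON, which is what makes JSONL convenient for streaming large training files to the API.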