3. Using the fine-tuned OpenAI model
We are now ready to use our fine-tuned OpenAI GPT-4o-mini
model. We will begin by defining a prompt based on a question taken from our initial dataset:
# Define the prompt
prompt = "What phenomenon makes global winds blow northeast to southwest or the reverse in the northern hemisphere and northwest to southeast or the reverse in the southern hemisphere?"
The goal is to verify that the model has been properly fine-tuned on the dataset and produces results similar to the completions we defined. We can now run the fine-tuned model:
# Assume first_non_empty_model is defined above this snippet
if generation:
    response = client.chat.completions.create(
        model=first_non_empty_model,
        temperature=0.0,  # Adjust as needed for variability
        messages=[
            {"role": "system", "content": "Given a question, reply with a complete explanation for students."},
            ...
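To make the call pattern concrete, here is a minimal, self-contained sketch of the full request. The fine-tuned model name shown is hypothetical (your fine-tuning job reports the real one, of the form ft:gpt-4o-mini-...), and the snippet assumes the OPENAI_API_KEY environment variable is set; the API call is skipped when it is not, so the message-building step can still be inspected on its own:

```python
import os

def build_messages(prompt):
    # Assemble the chat payload: the system instruction used during
    # fine-tuning, followed by the user's question
    return [
        {"role": "system", "content": "Given a question, reply with a complete explanation for students."},
        {"role": "user", "content": prompt},
    ]

prompt = ("What phenomenon makes global winds blow northeast to southwest "
          "or the reverse in the northern hemisphere and northwest to "
          "southeast or the reverse in the southern hemisphere?")
messages = build_messages(prompt)

if os.environ.get("OPENAI_API_KEY"):  # only call the API when a key is available
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model="ft:gpt-4o-mini-2024-07-18:my-org::abc123",  # hypothetical model name
        temperature=0.0,  # 0.0 for reproducible output during verification
        messages=messages,
    )
    print(response.choices[0].message.content)
```

Using the same system message as in the training data matters: the model was fine-tuned on that exact instruction, so reusing it keeps the request in-distribution and makes the comparison with the original completions meaningful.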