Fine-tuning tips
You must care for and feed the fine-tuning dataset to improve training quality, judging the results by the metrics or by experience with specific test cases. Here is a summary of OpenAI’s suggestions for fine-tuning:
- Review existing examples for issues: You might have introduced style, logic, or grammar problems into the dataset, including examples that contain errors. Compare how the model performed before and after the data was added; the epoch checkpoints are a useful tool for this.
- Gather more examples to fill the gaps: Additional training examples can show the model how to address gaps in its abilities, although it is always hard to say how many examples are enough.
- Include examples with errors: Sometimes, it is best to learn from the master. Let’s ask ChatGPT whether mistakes belong in fine-tuning examples:
Should fine-tuning examples include intentional errors that might be expected from real customers? Yes, it's beneficial to include intentional errors in fine...
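Reviewing existing examples for issues, as suggested above, can be partly automated. The sketch below, a minimal illustration rather than an official tool, scans a chat-format JSONL fine-tuning dataset for basic structural problems: invalid JSON, unrecognized roles, empty message content, and examples with no assistant reply to learn from. The function names and checks are this sketch's own assumptions, not part of the OpenAI API.

```python
import json

# Roles accepted in chat-format fine-tuning examples (an assumption of this sketch).
VALID_ROLES = {"system", "user", "assistant"}

def check_example(example: dict) -> list[str]:
    """Return a list of issues found in one chat-format training example."""
    issues = []
    messages = example.get("messages")
    if not isinstance(messages, list) or not messages:
        return ["missing or empty 'messages' list"]
    for i, msg in enumerate(messages):
        role = msg.get("role")
        if role not in VALID_ROLES:
            issues.append(f"message {i}: unrecognized role {role!r}")
        if not (msg.get("content") or "").strip():
            issues.append(f"message {i}: empty content")
    if not any(m.get("role") == "assistant" for m in messages):
        issues.append("no assistant message to learn from")
    return issues

def check_dataset(lines) -> dict[int, list[str]]:
    """Map 1-based line number -> issues for every problematic JSONL line."""
    report = {}
    for n, line in enumerate(lines, start=1):
        try:
            example = json.loads(line)
        except json.JSONDecodeError:
            report[n] = ["invalid JSON"]
            continue
        issues = check_example(example)
        if issues:
            report[n] = issues
    return report
```

Running `check_dataset` over the lines of your training file before uploading surfaces malformed examples early, so the review effort can focus on the subtler style and logic issues that only a human read-through will catch.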