Fine-Tuning
What happens when prompt engineering has gone as far as it can? When higher-quality results are still needed, few-shot examples are overwhelming the prompt, performance problems appear, or a large prompt is driving up token costs, fine-tuning comes into the picture.
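One common driver is the last point: few-shot examples repeated in every prompt can be moved into training data instead. The sketch below is a minimal, hypothetical illustration of converting such examples into a JSONL training file; it assumes an OpenAI-style chat fine-tuning record shape (`messages` with `system`/`user`/`assistant` roles), and the filenames and example text are invented for demonstration.

```python
import json

# Hypothetical few-shot examples that previously bloated every prompt.
examples = [
    {"prompt": "Summarize: The meeting covered Q3 budgets.",
     "completion": "Q3 budget review."},
    {"prompt": "Summarize: The server outage lasted two hours.",
     "completion": "Two-hour server outage."},
]

# Write one chat-formatted training record per line (JSONL).
with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        record = {
            "messages": [
                {"role": "system", "content": "You are a concise summarizer."},
                {"role": "user", "content": ex["prompt"]},
                {"role": "assistant", "content": ex["completion"]},
            ]
        }
        f.write(json.dumps(record) + "\n")
```

After uploading a file like this to a fine-tuning service, the examples no longer need to be sent with every request, which shortens prompts and reduces per-call token cost.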
As mentioned in the previous chapter, solutions sometimes combine overlapping approaches such as Retrieval-Augmented Generation (RAG), prompt engineering, and fine-tuning. Fine-tuning trains the model on task-specific examples, improving its grasp of the target domain and desired output. We will focus on a few critical deliverables before contextualizing them by completing the Wove case study begun in Chapter 6, Gathering Data – Content is King:
- Fine-tuning 101
- Creating fine-tuned models
- Fine-tuning tips
- Wove case study, continued
Regardless of the tools, the team must care for and feed the large language model (LLM) to improve its output. Though the methods discussed in this book can reach their limits, fine-tuning is another excellent...