Fine-tuning
Fine-tuning becomes an option when the responses do not produce an acceptable result or when prompt design does not meet expectations. Or does it? In Chapter 11, Leveraging LLM Embeddings as an Alternative to Fine-Tuning, we saw that advanced prompt engineering leveraging the embeddings of OpenAI's Ada LLM could produce good results.
So, what should we do? Prompt design, crafting good prompts for a ready-to-use model? Prompt engineering with an embedding model? Fine-tuning a model to fit our needs?
Each of these choices comes with a cost. The best empirical method in computer science remains to:
- Rely on a reliable evaluation dataset, optimized for both volume and quality.
- Test different models and approaches. In this case, evaluate the outputs obtained through prompt design, engineering, and fine-tuning.
- Evaluate the risks and costs.
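The evaluation loop described above can be sketched in a few lines of Python. Everything in this sketch is an illustrative assumption, not part of the original text: the `token_f1` metric (a simple token-overlap score standing in for a real evaluation metric), the tiny dataset, and the placeholder candidate systems that would, in practice, call the prompt-design, embedding, and fine-tuned approaches being compared.

```python
# Minimal sketch: score several candidate approaches (prompt design,
# prompt engineering with embeddings, fine-tuning) against the same
# evaluation dataset. Systems, dataset, and metric are placeholders.

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a prediction and a reference answer."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    common = len(set(pred) & set(ref))
    if common == 0:
        return 0.0
    precision = common / len(pred)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

def evaluate(system, dataset) -> float:
    """Average score of one candidate system over the evaluation set."""
    scores = [token_f1(system(question), reference)
              for question, reference in dataset]
    return sum(scores) / len(scores)

# Hypothetical evaluation dataset: (question, reference answer) pairs.
dataset = [
    ("What is the capital of France?", "Paris"),
    ("Who wrote Hamlet?", "William Shakespeare"),
]

# Placeholder candidates; real ones would call the approaches under test.
candidates = {
    "prompt_design": lambda q: "Paris" if "France" in q else "Shakespeare",
    "fine_tuned": lambda q: "Paris" if "France" in q else "William Shakespeare",
}

for name, system in sorted(candidates.items()):
    print(f"{name}: {evaluate(system, dataset):.2f}")
```

Comparing the averaged scores, alongside the risk and cost estimates for each approach, gives an empirical basis for choosing between prompt design, prompt engineering, and fine-tuning.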
Like Amazon Web Services (AWS), Microsoft Azure, IBM Cloud, and others, Google Cloud provides a solid, simplified...