Fine-tuning 101
Think of fine-tuning as teaching the model how to approach a problem rather than telling it the exact answers; that is what RAG is for. You coach the LLM on how to think through an issue and how it should respond. Although fine-tuning relies on specific examples, don't expect the model to ever reproduce one of them verbatim; each example simply demonstrates the desired behavior. Imagine we need the model to act like a science teacher: a prompt alone can tell the LLM to be a science teacher, but if it needs to sound like an 8th-grade science teacher, we share examples of what that should sound like. Once those examples have been used to fine-tune the model, we compare its outputs against our example outputs and decide whether it is doing a good job. We will do this work using fine-tuning in the ChatGPT playground, as shown in Figure 8.1.
Figure 8.1 – Fine-tuning in ChatGPT
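To make the idea of "sharing examples" concrete, here is a minimal sketch of what a small training file might look like for the 8th-grade science teacher scenario. It assumes the chat-style JSONL format that OpenAI's fine-tuning endpoint accepts for newer chat models; the file name science_teacher.jsonl and the example questions and answers are purely illustrative.

```python
import json

# Hypothetical training examples: each record pairs a student question with an
# answer written in the tone we want the model to learn (an 8th-grade science
# teacher). The system message states the role; the assistant message shows
# how that role should sound.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a friendly 8th-grade science teacher."},
            {"role": "user", "content": "Why does the moon have phases?"},
            {"role": "assistant", "content": "Great question! The moon doesn't make its own light; we only see the part that the sun is shining on, and that part changes as the moon orbits Earth."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a friendly 8th-grade science teacher."},
            {"role": "user", "content": "What is photosynthesis?"},
            {"role": "assistant", "content": "Photosynthesis is how plants make their own food: they use sunlight, water, and carbon dioxide to produce sugar and release oxygen."},
        ]
    },
]

# Write one JSON object per line (JSONL), the format expected for upload.
with open("science_teacher.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

Notice that the examples teach tone and approach, not facts to be memorized; a real training set would contain many more examples covering the range of questions the model is expected to handle.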
We will walk through an example, which will give you a feel for what is being built, how to contribute examples...
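For readers who prefer to script the same steps instead of clicking through the playground UI, here is a minimal sketch using the OpenAI Python SDK. The file name science_teacher.jsonl carries over from the hypothetical example above, and the base model name is an assumption; the models available for fine-tuning depend on your account.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Upload the training file prepared earlier.
training_file = client.files.create(
    file=open("science_teacher.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job against a base chat model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# The job runs asynchronously; you can poll its status or watch it in the UI.
print(job.id, job.status)
```

Whether you use the UI or the SDK, the underlying workflow is the same: prepare example conversations, upload them, train, and then compare the fine-tuned model's outputs against the examples you provided.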