Advanced tuning and optimization techniques
At the end of the previous chapter, we discussed LLMs and how they are trained and tuned. I mentioned some of the tuning approaches at a high level, and in this section, we will dive deeper into how we can tune LLMs to better address our specific needs. Let's set the stage by outlining how we interact with LLMs in the first place, which we generally do via prompts.
Definition
A prompt is a piece of text or instruction that we provide to an LLM to guide its response or output. It tells the LLM what to do and, in some cases, provides guidance on how to do it; for example, "summarize this financial document, specifically focusing on details relating to company performance in Q4, 2023."
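To make the definition concrete, a prompt is often assembled programmatically from an instruction template plus task-specific inputs before being sent to a model. The sketch below is illustrative only: the template wording, the `build_prompt` helper, and the variable names are assumptions, not part of any particular LLM library.

```python
# A minimal sketch of constructing a prompt from a reusable template.
# The template text and helper function here are hypothetical examples.

PROMPT_TEMPLATE = (
    "Summarize this financial document, specifically focusing on details "
    "relating to company performance in {period}.\n\n"
    "Document:\n{document}"
)

def build_prompt(document: str, period: str) -> str:
    """Fill the instruction template with the task-specific inputs."""
    return PROMPT_TEMPLATE.format(document=document, period=period)

# The resulting string is what gets passed to the LLM as its input.
prompt = build_prompt(
    document="...revenue grew 12% year over year in the fourth quarter...",
    period="Q4 2023",
)
print(prompt)
```

Separating the fixed instruction from the variable inputs like this makes it easy to reuse the same prompt across many documents or reporting periods.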
The first LLM tuning technique we’ll explore is prompt engineering.
Prompt engineering
Prompts are the most straightforward method we can use to tune an LLM’s outputs to our specific needs. In fact, during...