Prompting a foundation model
LLMs can be used directly for tasks such as summarization, question answering, and reasoning. Because they were trained on very large amounts of data, they can answer a wide variety of questions across many subjects well, since the relevant context is already present in their training data.
In many practical cases, such LLMs can answer our questions correctly on the first attempt. In other cases, we will need to provide a few clarifications or examples. The quality of the answers in these zero-shot or few-shot approaches depends heavily on the user's ability to craft prompts for the LLM. In this section, we will show the simplest way to interact with an LLM on Kaggle, using prompts.
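To make the distinction concrete, the following minimal sketch contrasts a zero-shot prompt with a few-shot prompt; the task and the wording of these prompts are illustrative assumptions, not examples taken from the chapter.

```python
# Zero-shot: the task is stated directly, with no worked examples
zero_shot_prompt = (
    "Classify the sentiment of this review as positive or negative: "
    "'The plot was dull.'"
)

# Few-shot: a few worked examples are given before the new input,
# so the model can infer the expected format and answer style
few_shot_prompt = (
    "Review: 'Loved every minute.' Sentiment: positive\n"
    "Review: 'A waste of two hours.' Sentiment: negative\n"
    "Review: 'The plot was dull.' Sentiment:"
)
```

Both strings would be passed to the model in exactly the same way; only the amount of in-prompt guidance differs.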
Model evaluation and testing
Before we can use an LLM on Kaggle, we need to perform a few preparation steps. We begin by loading the model and then defining a tokenizer. Next, we create a model pipeline. In our first code example,...
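The sketch below illustrates these preparation steps with the Hugging Face transformers library; the model name is a placeholder assumption, not necessarily the model used in this chapter.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Placeholder model name; replace with the checkpoint you intend to use
model_name = "gpt2"

# Load the pretrained model weights
model = AutoModelForCausalLM.from_pretrained(model_name)

# Define the tokenizer that converts text to token IDs and back
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Create a text-generation pipeline that wraps the model and tokenizer
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Send a simple prompt to the model and print the generated continuation
output = generator("Summarize in one sentence: Large language models are trained on vast text corpora.",
                   max_new_tokens=50)
print(output[0]["generated_text"])
```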