Running an LLM to follow instructions
In this recipe, we will learn how to get an LLM to follow instructions via prompting. An LLM can be given some context and asked to generate text based on that context, and it can be explicitly instructed to produce output that meets specific user requirements. This capability greatly expands the range of use cases and applications that can be built: the context and the question to be answered can be generated dynamically, supporting everything from answering simple math problems to sophisticated data extraction from knowledge bases.
We will use the meta-llama/Meta-Llama-3.1-8B-Instruct model for this recipe. This model is built on top of the meta-llama/Meta-Llama-3.1-8B base model and has been fine-tuned to follow instructions provided via prompts.
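To give a sense of what instruction following looks like in practice, here is a minimal sketch using the transformers text-generation pipeline. It assumes the transformers and torch packages are installed and that you have been granted access to the gated model; the prompt, system message, and generation parameters are illustrative rather than the recipe's final code.

```python
# A minimal sketch of instruction following with an instruct-tuned model,
# assuming transformers and torch are installed and you have access to the
# gated meta-llama/Meta-Llama-3.1-8B-Instruct model on Hugging Face.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The context and the instruction are passed as chat messages; the pipeline
# applies the model's chat template before generating a response.
messages = [
    {"role": "system", "content": "Answer using only the provided context."},
    {"role": "user", "content": (
        "Context: The Eiffel Tower is 330 metres tall.\n"
        "Question: How tall is the Eiffel Tower?"
    )},
]

output = generator(messages, max_new_tokens=64)
# The last message in generated_text is the model's reply.
print(output[0]["generated_text"][-1]["content"])
```

The same pattern works for dynamically constructed prompts: build the context string at runtime, append the user's question, and pass both as messages.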
Getting ready
You will need to create the necessary credentials on the Hugging Face site to ensure that the model is available for use...
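As a rough sketch of the authentication step, assuming the huggingface_hub package is installed, you can log in from Python with an access token generated under Settings > Access Tokens on the Hugging Face site; the token below is a placeholder, not a real value.

```python
# Authenticate with Hugging Face so that gated models can be downloaded.
# The token string is a placeholder; replace it with your own access token.
from huggingface_hub import login

login(token="hf_xxx")
```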