RAG vocabulary
Now is a good time to review some vocabulary that will help you become familiar with the core concepts in RAG. In the following subsections, we will cover LLMs, prompting concepts, inference, context windows, fine-tuning approaches, vector databases, and vectors/embeddings. This is not an exhaustive list, but understanding these core concepts should make everything else we teach you about RAG easier to follow.
LLM
Most of this book will deal with LLMs, generative AI models that focus on generating text. We will keep things simple by concentrating on LLMs because they are the type of model that most RAG pipelines use. However, while we will focus primarily on LLMs, RAG can also be applied to other types of generative models, such as those for images, audio, and video. We will cover these other types of models and how they are used...
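To make the term a little more concrete, here is a minimal sketch of asking an LLM to generate text with the Hugging Face transformers library. The model name ("gpt2") and the prompt are purely illustrative choices for this sketch, not something this book prescribes; any causal language model would work the same way.

```python
# A minimal sketch of LLM text generation, assuming the Hugging Face
# `transformers` library is installed (pip install transformers).
from transformers import pipeline

# "gpt2" is only an illustrative, small open model chosen for this sketch.
generator = pipeline("text-generation", model="gpt2")

prompt = "Retrieval-augmented generation (RAG) is"
result = generator(prompt, max_new_tokens=40, do_sample=True)

# The pipeline returns a list of dicts; each dict contains the generated text.
print(result[0]["generated_text"])
```

The key point of the sketch is simply that an LLM takes a text prompt in and produces continuation text out; everything RAG adds happens around that basic step.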