Customizing our RAG components
For starters, let’s talk about which components of a RAG workflow can be customized in LlamaIndex. The short answer: pretty much all of them, as we have already seen in the previous chapters. The framework’s flexibility and its support for customizing every core component is a definite advantage.

Framework aside, the heart of any RAG workflow is the LLM and the embedding model it uses. In all the examples so far, we have relied on LlamaIndex’s default configuration, which is based on OpenAI models. But, as we briefly discussed in Chapter 3, Kickstarting Your Journey with LlamaIndex, there are both good reasons and plenty of options for choosing other models – commercial variants offered by established companies in this market, as well as open source models that can be hosted locally, offering private alternatives and substantially reducing the costs of a large-scale...