Optimizing language models – the symbiosis of fine-tuning, RAG, and LlamaIndex
In the previous chapter, we saw that vanilla LLMs have some limitations right out of the box. Their knowledge is static, frozen at training time, and they occasionally spit out plausible-sounding nonsense. We also learned about RAG as a potential way to mitigate these issues. Blending prompt engineering techniques with programmatic methods, RAG can elegantly address many of these LLM shortcomings.
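To make the RAG pattern concrete before we dive deeper, here is a minimal, library-free sketch of its two programmatic steps: retrieve the most relevant snippet from a corpus, then splice it into the prompt. The toy corpus, the keyword-overlap retriever, and the template wording are all illustrative assumptions, not the approach any particular framework mandates.

```python
import re

# Toy corpus standing in for an external knowledge source (an assumption
# for illustration; real systems index documents with embeddings).
CORPUS = [
    "LlamaIndex is a data framework for connecting LLMs to external data.",
    "Fine-tuning adapts a pretrained model with additional training data.",
    "RAG augments a prompt with retrieved context before generation.",
]

def _tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, corpus: list[str]) -> str:
    """Naive retrieval: return the entry with the largest keyword overlap."""
    q = _tokens(question)
    return max(corpus, key=lambda doc: len(q & _tokens(doc)))

def build_rag_prompt(question: str) -> str:
    """Augment the user's question with retrieved context."""
    context = retrieve(question, CORPUS)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_rag_prompt("How does RAG augment a prompt?")
```

The LLM then answers from the augmented prompt rather than from its static training data alone, which is what lets RAG work around the knowledge-cutoff problem.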
What is prompt engineering?
Prompt engineering involves crafting text inputs designed to be processed effectively by a generative AI (GenAI) model. Composed in natural language, these prompts describe the specific tasks to be carried out by the AI. We'll have a much deeper conversation on this topic in Chapter 10, Prompt Engineering Guidelines and Best Practices.
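As a small taste of what Chapter 10 covers, the sketch below shows one common prompt-engineering move: stating the task, the constraints, and the desired output format explicitly in natural language. The function name and template wording are hypothetical choices for illustration.

```python
def make_summary_prompt(text: str, max_sentences: int = 2) -> str:
    """Build a prompt that spells out role, task, and output constraints."""
    return (
        "You are a concise technical writer.\n"                      # role
        f"Summarize the text below in at most {max_sentences} "      # task +
        "sentences.\n"                                               # constraint
        "Respond in plain English with no bullet points.\n\n"        # format
        f"Text:\n{text}"
    )

prompt = make_summary_prompt("RAG augments prompts with retrieved context.")
```

Even this small amount of structure tends to produce more predictable outputs than handing the model raw text.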
Is RAG the only possible solution?
Of course not. Another approach is to fine-tune the AI model, which involves additional training on proprietary data to adapt the...