Code lab 9.1 – ragas
Retrieval-augmented generation assessment (ragas) is an evaluation framework designed specifically for RAG. In this code lab, we will step through integrating ragas into your code, generating a synthetic ground truth, and establishing a comprehensive set of metrics that you can build into your RAG system. But evaluation systems are meant to evaluate something, right? So what will we evaluate in our code lab?
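To give you a feel for what is coming, here is a minimal sketch of the kind of evaluation record ragas works with: a question, the generated answer, the retrieved contexts, and a ground-truth reference. The field names follow the ragas 0.1-era API and the sample content is invented purely for illustration; because the project evolves quickly, check the current ragas documentation for the exact schema your version expects.

```python
# Sketch of the record format ragas evaluation typically expects.
# Field names follow the 0.1-era ragas API; sample rows are invented
# for illustration only.
eval_records = {
    "question": ["What is hybrid search?"],
    "answer": [
        "Hybrid search combines dense vector search with keyword-based search."
    ],
    # contexts is a list of retrieved passages per question
    "contexts": [[
        "Hybrid search merges semantic (dense) retrieval with "
        "BM25-style keyword (sparse) retrieval."
    ]],
    "ground_truth": [
        "Hybrid search combines dense semantic search with keyword search."
    ],
}

# With ragas installed (and an LLM API key configured), scoring this
# dataset would look roughly like the following -- shown as comments
# here because it makes live LLM calls:
#   from datasets import Dataset
#   from ragas import evaluate
#   from ragas.metrics import (faithfulness, answer_relevancy,
#                              context_precision, context_recall)
#   result = evaluate(Dataset.from_dict(eval_records),
#                     metrics=[faithfulness, answer_relevancy,
#                              context_precision, context_recall])

print(sorted(eval_records.keys()))
```

Note that every column must have one entry per question; ragas scores each row independently and then aggregates the metric values across the dataset.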
If you remember, in Chapter 8 we introduced a new search method for our retrieval stage called hybrid search. In this code lab, we will first implement the original dense-vector, semantic-based search and then use ragas to evaluate the impact of switching to the hybrid search method. This will give you a real-world working example of how a comprehensive evaluation system can be implemented in your own code!
Before we dive into how to use ragas, it is important to note that it is a rapidly evolving project. New features and API changes are happening...