List index query engine
Don’t think of ListIndex as simply a list of nodes. The query engine sends the user input together with each document to an LLM as a prompt. The LLM evaluates the semantic similarity between each document and the query, implicitly ranking and selecting the most relevant nodes. LlamaIndex then filters the documents based on the resulting rankings, and it can take the task further by synthesizing information from multiple nodes and documents.
We can see that the selection process with an LLM is not rule-based. Nothing is predefined, and the selection is prompt-driven: each prompt combines the user input with a document from the collection. The LLM evaluates each document in the list independently, assigning a score based on its perceived relevance to the query. This score isn’t relative to the other documents; it’s a measure of how well the LLM thinks the current document answers the question. The top-k documents are then selected for response synthesis.
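The score-independently-then-take-top-k logic can be sketched in plain Python. Note that this is only a minimal illustration: the `mock_llm_score` function below is a hypothetical stand-in for the LLM relevance judgment (here, simple word overlap), not the LlamaIndex API, and all names are assumptions for demonstration.

```python
def mock_llm_score(query: str, document: str) -> float:
    """Stand-in for the LLM's relevance judgment: the fraction of
    query words that also appear in the document. A real list index
    would instead prompt the LLM with the query and the document."""
    query_words = set(query.lower().split())
    doc_words = set(document.lower().split())
    return len(query_words & doc_words) / len(query_words)

def select_top_k(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Each document is scored independently of the others;
    # the score does not depend on what else is in the list.
    scored = [(mock_llm_score(query, doc), doc) for doc in documents]
    # The documents are then ranked by score and the top-k are kept.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:k]]

docs = [
    "Drones use LLMs to classify obstacles in real time.",
    "The history of kite flying in the nineteenth century.",
    "LLMs can rank documents by semantic relevance to a query.",
]
top = select_top_k("How do LLMs rank documents?", docs, k=2)
```

A real query engine would go one step further and synthesize a single answer from the selected documents rather than returning them verbatim.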