Defining your LLM
With the prompt template selected, we can choose an LLM, a central component of any RAG application. The following code shows the LLM as the next link in rag_chain:
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
As discussed previously, the output of the previous step, the prompt object, becomes the input of the next step, the LLM. In other words, the formatted prompt we generated in the previous step is piped directly into the LLM.
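To see this hand-off concretely, you can run only the first two links of the chain and inspect exactly what the LLM will receive. The following is a minimal sketch, assuming the retriever, format_docs, and prompt objects from the earlier steps are already defined; the question string is just a placeholder:

from langchain_core.runnables import RunnablePassthrough

# Sub-chain that stops just before the LLM: retrieval -> prompt formatting
prompt_only = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
)

# The result is the fully formatted prompt value the LLM would consume
print(prompt_only.invoke("What is retrieval-augmented generation?"))

This is a handy debugging pattern: if the final answers look wrong, checking the assembled prompt tells you whether the problem lies in retrieval or in generation.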
Above rag_chain, we define the LLM we want to use:
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-4o", temperature=0)
This creates an instance of the ChatOpenAI class from the langchain_openai module, which serves as an interface to OpenAI’s language models, specifically...
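Once defined, llm can also be exercised on its own, outside the chain. Here is a brief sketch, assuming OPENAI_API_KEY is set in your environment; the question is a placeholder:

# Invoking the model directly returns an AIMessage; inside rag_chain,
# StrOutputParser() is what extracts its string content as the final output.
response = llm.invoke("What does a retriever do in a RAG pipeline?")
print(response.content)

Note the temperature=0 setting: it makes the model's output as deterministic as possible, which is generally what you want when answers should stay grounded in the retrieved context rather than vary creatively between runs.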