Using OpenAI models instead of local ones
In this chapter, we used several models: some ran locally, while the one from OpenAI was accessed via API calls. In fact, OpenAI models can be used in all of the recipes. The simplest way is to initialize the LLM with the following snippet. Because the model runs as a hosted service, using OpenAI models does not require a GPU, and every recipe can be executed as-is:
import getpass
import os

from langchain_openai import ChatOpenAI

# Prompt for the API key at runtime so it is not hard-coded
os.environ["OPENAI_API_KEY"] = getpass.getpass()

llm = ChatOpenAI(model="gpt-4o-mini")
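Once initialized, the llm object can be dropped into any of the recipes in place of a local model. As a quick sanity check (a minimal sketch; the prompt text is only an illustration), you can invoke the model directly:

# Send a single prompt and print the model's reply
response = llm.invoke("Explain in one sentence what a large language model is.")
print(response.content)

The invoke call returns an AIMessage object, whose content attribute holds the generated text.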
This completes our chapter on generative AI and LLMs. We have only scratched the surface of what is possible with generative AI; we hope the examples presented here help illuminate the capabilities of LLMs and their relation to generative AI. We recommend exploring the LangChain site for updates, new tools, and agents suited to your use cases, and applying them in production scenarios following...