How does LangChain work?
The LangChain framework simplifies building sophisticated LLM applications by providing modular components that connect language models to external data and services. Its capabilities are organized into modules, spanning everything from basic LLM calls to complex reasoning and persistence.
These components can be combined into pipelines, called chains, that sequence actions such as:
- Loading documents
- Embedding for retrieval
- Querying LLMs
- Parsing outputs
- Writing memory
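The pipeline above can be sketched in plain Python. This is a conceptual illustration of how a chain passes each step's output to the next, not LangChain's actual API; every function name here is made up for the example.

```python
# Conceptual sketch of a chain: load -> embed -> query -> parse -> remember.
# Plain Python stand-ins, not LangChain classes; all names are illustrative.

def load_documents(source):
    # A real loader would read files, web pages, databases, etc.
    return [f"contents of {source}"]

def embed(docs):
    # Stand-in for an embedding model: map each document to a vector.
    return [[float(len(d))] for d in docs]

def query_llm(prompt):
    # Stand-in for an actual LLM call.
    return f"answer to: {prompt}"

def parse_output(raw):
    # Output parsers turn raw model text into structured results.
    return raw.strip()

memory = []

def write_memory(result):
    # Persist the result so later turns can refer back to it.
    memory.append(result)
    return result

def run_chain(source, question):
    docs = load_documents(source)
    _vectors = embed(docs)  # in practice these go into a vector store
    raw = query_llm(question)
    result = parse_output(raw)
    return write_memory(result)

print(run_chain("notes.txt", "What is LangChain?"))
```

In real LangChain code each of these stand-ins corresponds to a component (document loader, embedding model, LLM, output parser, memory) that can be swapped independently.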
Chains map modules to application goals, while agents use chains for goal-directed interactions with users: they repeatedly choose actions based on observations, plan the next step of reasoning, and persist memory across conversations.
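The observe-then-act loop that agents run can be illustrated with a minimal sketch. Again this is plain Python for clarity, not LangChain's agent API; the hard-coded planning rule stands in for the LLM deciding which tool to call next.

```python
# Illustrative agent loop: pick an action from the latest observation,
# execute it, and repeat until the goal is satisfied. All names here
# are hypothetical stand-ins for LLM-driven planning and tool calls.

def plan_next_action(observation, goal):
    # A real agent would ask the LLM to choose a tool and its input;
    # here a trivial rule decides instead.
    if goal in observation:
        return ("finish", observation)
    return ("search", goal)

def execute(action, argument):
    # Stand-in for a tool call (search, calculator, API request, ...).
    return f"found {argument}"

def run_agent(goal, max_steps=5):
    observation = ""
    for _ in range(max_steps):
        action, arg = plan_next_action(observation, goal)
        if action == "finish":
            return arg
        observation = execute(action, arg)
    return observation

print(run_agent("weather in Paris"))
```

The `max_steps` cap mirrors a common safeguard in agent frameworks: without it, a bad plan could loop forever.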
The modules, ranging from simple to advanced, are:
- LLMs and chat models: Provide a common interface for connecting to and querying language models such as GPT-3. Support async, streaming, and batch...
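The value of this module is the uniform interface across providers. The sketch below mocks that interface (synchronous invoke, batch, and streaming) with a fake model; `FakeChatModel` is an illustration, not a LangChain class, and real usage would instead instantiate a provider-specific model with credentials configured.

```python
# Mock of the kind of uniform model interface LangChain exposes:
# one object supporting single calls, batched calls, and streaming.
# This class is hypothetical; it only echoes its input.

class FakeChatModel:
    def invoke(self, prompt):
        # Single prompt in, single completion out.
        return f"echo: {prompt}"

    def batch(self, prompts):
        # Many prompts processed in one call.
        return [self.invoke(p) for p in prompts]

    def stream(self, prompt):
        # Yield the response piece by piece, like token streaming.
        for token in self.invoke(prompt).split():
            yield token

model = FakeChatModel()
print(model.invoke("hello"))
print(model.batch(["a", "b"]))
print(" ".join(model.stream("hi there")))
```

Because every model exposes the same three entry points, application code can swap providers without rewriting the chain around them.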