Harnessing the power of LLMs with LangChain
LLMs are powerful tools, yet they have some limitations. One of them is the length of the context window. For example, the maximum input sequence of Llama 2 is 4,096 tokens, which translates into even fewer words. As a reference, most of the chapters in this book hover around 10,000 words, so many tasks simply wouldn't fit within this limit. Another limitation is that an LLM's entire knowledge is frozen in its weights at training time. It has no direct way to interact with external data sources, such as databases or service APIs, so its knowledge can be outdated or insufficient. The LangChain framework helps us alleviate these issues. It does so with the following modules:
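To see why the context window matters in practice, here is a simplified sketch of splitting a long document into overlapping chunks that fit within a model's limit, similar in spirit to the text splitters LangChain provides (the chunk size, overlap, and function name below are illustrative assumptions, not LangChain's API):

```python
# Simplified, character-based text splitter: break a long document into
# overlapping chunks so each piece fits an LLM's context window.
# chunk_size and overlap are illustrative values, not LangChain defaults.

def split_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into chunks of at most chunk_size characters,
    with `overlap` characters shared between consecutive chunks."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks

# A document far longer than a 4,096-token window (~25,000 characters).
document = "word " * 5000
chunks = split_text(document, chunk_size=2000, overlap=200)
print(len(chunks))
```

Each chunk can then be processed independently (for example, summarized) and the partial results combined, which is exactly the pattern LangChain automates with its document-processing chains.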
- Model I/O: The framework differentiates between classic LLMs and chat models. In the first case, we can prompt the model with a single prompt, and it will generate a response. The second case is more interactive – it presumes a back-and-forth communication between the user and the model, where each message in the conversation is tagged with a role.
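To make the distinction concrete, here is a minimal plain-Python sketch contrasting the two modes. Note that `fake_llm` is a hypothetical stand-in for a real model, and the dictionaries below only approximate the role-tagged message objects a framework like LangChain uses:

```python
# Contrast a single-prompt LLM call with a chat model's back-and-forth.
# fake_llm is a hypothetical stub standing in for a real model call.

def fake_llm(prompt: str) -> str:
    """Hypothetical model: returns a canned completion for the prompt."""
    return f"Response to: {prompt}"

# Classic LLM: one prompt in, one completion out, no conversation state.
completion = fake_llm("Summarize LangChain in one sentence.")

# Chat model: a growing list of role-tagged messages forms the context
# that a real model would receive on every new turn.
history: list[dict[str, str]] = [
    {"role": "system", "content": "You are a helpful assistant."}
]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # A real chat model would consume the full history here;
    # this stub only looks at the latest message.
    reply = fake_llm(user_message)
    history.append({"role": "assistant", "content": reply})
    return reply

chat("What is a context window?")
chat("And why does it matter?")
print(len(history))
```

The key design difference is that the chat interface carries the whole conversation forward: each call appends a user message and the model's reply, so later turns can depend on earlier ones.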