Summary
In this chapter, we walked through four different ways of setting up an environment with LangChain and the other libraries needed for this book. Then, we introduced several providers of models for text and images. For each of them, we explained where to obtain an API token and demonstrated how to call a model.
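As a quick reminder of what that setup enables, the following minimal sketch shows what calling a chat model looks like once the environment and API token are in place. It assumes the langchain-openai package, an OPENAI_API_KEY environment variable, and an illustrative model name; the other providers covered in the chapter follow the same pattern with their own classes and keys.

```python
# Minimal sketch: calling a chat model through LangChain.
# Assumes the langchain-openai package is installed and the
# OPENAI_API_KEY environment variable is set; the model name is illustrative.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
response = llm.invoke("Say hello in one short sentence.")
print(response.content)
```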
Finally, we developed an LLM app for text categorization (intent classification) and sentiment analysis in a customer service use case. This showcased how easily LangChain orchestrates multiple models into useful applications. By chaining together different capabilities in LangChain, we can help reduce response times in customer service and ensure that answers are accurate and to the point.
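To make the recap concrete, here is a minimal sketch of the kind of chaining such a use case relies on, classifying both the sentiment and the intent of a customer message. It assumes the langchain-core and langchain-openai packages and the LCEL pipe syntax; the prompts and model name are illustrative placeholders, not the exact ones from the chapter.

```python
# Minimal sketch: two small LCEL chains for sentiment and intent classification.
# Assumes langchain-core and langchain-openai are installed and OPENAI_API_KEY is set;
# prompts and model name are illustrative placeholders.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

sentiment_chain = (
    ChatPromptTemplate.from_template(
        "Classify the sentiment of this customer message as positive, neutral, or negative:\n{text}"
    )
    | llm
    | StrOutputParser()
)

intent_chain = (
    ChatPromptTemplate.from_template(
        "Name the customer's intent in one or two words (for example: refund, complaint, question):\n{text}"
    )
    | llm
    | StrOutputParser()
)

message = "My order arrived broken and I want my money back."
print(sentiment_chain.invoke({"text": message}))
print(intent_chain.invoke({"text": message}))
```

Running both chains on each incoming message gives a support team a quick label for routing (intent) and prioritization (sentiment), which is the kind of orchestration the chapter's use case demonstrated.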
In Chapter 4, Building Capable Assistants, and Chapter 5, Building a Chatbot Like ChatGPT, we'll dive deeper into use cases such as question answering in chatbots, augmenting them with tools and retrieval.