Summary
In this chapter, we walked through four distinct ways of setting up an environment with LangChain and the other libraries used in this book. We then introduced several providers of models for text and images. For each of them, we explained where to get the API token and demonstrated how to call a model. We then went through the main building blocks in LangChain for interacting with models, emphasizing that its common API allows straightforward transitions between different LLM providers without significant alterations to the solution's codebase. We also worked through examples with Anthropic's Claude 2 and 3, with Gemini Pro, and with a few models on Hugging Face, including Mistral, as well as OpenAI's GPT-4. In addition to Hugging Face, we ran models locally with llama.cpp and GPT4All.
Finally, we developed an LLM app for text categorization (intent classification) and sentiment analysis in a customer service use case. I hope it...