Start working with LLMs via Hugging Face Hub
Now that we are familiar with LangChain's components, it is time to start using our LLMs. If you want to work with open-source LLMs, the Hugging Face Hub integration is extremely versatile: with just one access token, you can leverage all the open-source LLMs available in Hugging Face's repositories. Since this is a non-production scenario, I will use the free Inference API; if you intend to build production-ready applications, you can easily scale to Inference Endpoints, which give you dedicated, fully managed infrastructure to host and consume your LLMs. So let's see how to start integrating LangChain with Hugging Face Hub.
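To make the setup concrete, here is a minimal sketch of what the free Inference API looks like underneath the LangChain integration: a REST call authenticated with your user access token. The model id `google/flan-t5-large` is only illustrative, and the `HUGGINGFACEHUB_API_TOKEN` environment variable is an assumed convention for storing your token; any Hub model served by the free Inference API should work.

```python
import json
import os
import urllib.request

# Illustrative model id; substitute any Hub model served by the free Inference API.
API_URL = "https://api-inference.huggingface.co/models/google/flan-t5-large"


def build_headers(token: str) -> dict:
    """The Inference API authenticates each request with a Bearer token."""
    return {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}


def query(prompt: str, token: str):
    """POST a prompt to the Inference API and return the parsed JSON response."""
    data = json.dumps({"inputs": prompt}).encode("utf-8")
    req = urllib.request.Request(API_URL, data=data, headers=build_headers(token))
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    # Read the user access token from the environment (assumed variable name).
    token = os.environ.get("HUGGINGFACEHUB_API_TOKEN")
    if token:
        print(query("Translate English to French: Hello, world", token))
    else:
        print("Set HUGGINGFACEHUB_API_TOKEN to call the Inference API.")
```

Once the token works here, pointing LangChain's Hugging Face integration at the same environment variable is all the extra configuration you need.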
Create a Hugging Face user access token
To access the free Inference API, you will need a user access token, the credential that allows you to run the service. Follow these steps to create and activate your user access token:
Create a Hugging Face account. You can create a Hugging...