Building a chatbot with Ollama deployed to a self-hosted server
Ollama is a platform that lets you run pre-trained AI models on your own machine. What are the benefits? You avoid paying for cloud AI services, you don’t need to share your data with other companies, and you can potentially achieve lower latency. While Ollama mainly focuses on large language models, it also supports multimodal models that can process and describe images.
There are many state-of-the-art models optimized for Ollama, so you don’t need to be a machine learning expert to use them. You can handle nearly all the tasks you would with OpenAI but run them on your own server. Just install Ollama, select a model, and call the Ollama APIs using a .NET library.
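To make that concrete, here is a minimal sketch of what such a call can look like against Ollama’s REST API using nothing but HttpClient. It assumes Ollama is running locally on its default port (11434) and that a model named llama3 has already been pulled; both the model name and the prompt are placeholders, not part of the final app.

```csharp
using System.Net.Http.Json;
using System.Text.Json.Serialization;

// Minimal sketch: call Ollama's /api/generate endpoint directly.
// Assumes Ollama listens on its default port 11434 and the "llama3"
// model has already been pulled (ollama pull llama3).
using var http = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

var request = new
{
    model = "llama3",
    prompt = "Explain what Ollama is in one sentence.",
    stream = false // return the full answer as a single JSON response
};

var response = await http.PostAsJsonAsync("/api/generate", request);
response.EnsureSuccessStatusCode();

var result = await response.Content.ReadFromJsonAsync<GenerateResponse>();
Console.WriteLine(result?.Response);

// Only the field we care about from Ollama's JSON reply.
record GenerateResponse([property: JsonPropertyName("response")] string Response);
```

In the rest of the article we’ll use a .NET client library instead of hand-rolled HTTP calls, but the underlying request is essentially the one shown here.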
However, Ollama is too resource-intensive to run on mobile devices. Therefore, we’ll deploy it to an ASP.NET Core server and access it from a .NET MAUI client application. We’ll create a simple chatbot where users can type messages...
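As a preview of the pieces we’ll put together, here is a rough sketch of the server side: a minimal ASP.NET Core endpoint that relays the user’s message to Ollama and returns the model’s reply. The /chat route, the request shape, and the llama3 model name are assumptions for illustration, not the final design.

```csharp
using System.Net.Http.Json;
using System.Text.Json.Serialization;

var builder = WebApplication.CreateBuilder(args);

// Typed HttpClient pointing at the Ollama instance running on the same server.
builder.Services.AddHttpClient("ollama", client =>
    client.BaseAddress = new Uri("http://localhost:11434"));

var app = builder.Build();

// Hypothetical /chat endpoint: the MAUI client posts a message,
// the server forwards it to Ollama and returns the generated text.
app.MapPost("/chat", async (ChatRequest chat, IHttpClientFactory factory) =>
{
    var http = factory.CreateClient("ollama");

    var ollamaResponse = await http.PostAsJsonAsync("/api/generate", new
    {
        model = "llama3",      // assumes the model was pulled beforehand
        prompt = chat.Message,
        stream = false
    });
    ollamaResponse.EnsureSuccessStatusCode();

    var result = await ollamaResponse.Content.ReadFromJsonAsync<GenerateResponse>();
    return Results.Ok(new { reply = result?.Response });
});

app.Run();

record ChatRequest(string Message);
record GenerateResponse([property: JsonPropertyName("response")] string Response);
```

The .NET MAUI client then only needs to POST the user’s text to this endpoint and display the reply, which keeps all of the heavy lifting on the server.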