Chatting with Semantic Kernel functions
Semantic Kernel can be configured to use LLMs from OpenAI, Azure OpenAI, and locally hosted LLMs using LLamaSharp (see Further reading for resources on LLamaSharp).
For consistency, we’ll use the same LLMs we used in the previous chapter: the chat and text embeddings we deployed on Azure OpenAI.
Building the Kernel
To reference our deployed models, we’ll need the model name, endpoint, and a key from our Azure OpenAI Service.
We’ll collect these values using C#, in a similar manner to the last chapter:
using Microsoft.DotNet.Interactive;

string chatModelName = "gpt-4o-deployment";

string prompt = "Enter your Azure OpenAI Endpoint";
string url = await Kernel.GetInputAsync(prompt);

prompt = "Enter your Azure OpenAI key";
PasswordString key = await Kernel.GetPasswordAsync(prompt);
Next, we’ll need to reference the Semantic Kernel SDK for .NET via its...
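Once the SDK is referenced, building the kernel itself is brief. The following is a minimal sketch, assuming the chatModelName, url, and key values gathered above, the Microsoft.SemanticKernel NuGet package, and that PasswordString exposes GetClearTextPassword() to read the entered key:

```csharp
#r "nuget: Microsoft.SemanticKernel"

using Microsoft.SemanticKernel;

// Sketch: wire a Kernel to the Azure OpenAI chat deployment.
// chatModelName, url, and key come from the input prompts above.
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    deploymentName: chatModelName,
    endpoint: url,
    apiKey: key.GetClearTextPassword()); // assumed accessor on PasswordString
var kernel = builder.Build();
```

Note that both Microsoft.DotNet.Interactive and Microsoft.SemanticKernel define a type named Kernel, so when both namespaces are in scope in a notebook you may need to qualify one of them (or add a using alias) to avoid an ambiguous reference.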