Chatting with a deployed model
Now that we have an AzureOpenAIClient, we'll need to get a specific ChatClient to interact with our model.
To do this, we’ll need the name of the deployment we’re interacting with. For me, that will be gpt-4o-deployment, but for you, it will be whatever you named your chat model deployment earlier in this chapter.
We can get our ChatClient by calling GetChatClient with the name of the deployment we want to interact with, as shown here:
using OpenAI.Chat;

string chatModelName = "gpt-4o-deployment";
ChatClient chatClient = azureClient.GetChatClient(chatModelName);
Next, we’ll need to call the CompleteChat or CompleteChatAsync method on our new chatClient.
Both methods require a collection of chat messages to give the model context for the conversation.
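To preview how these pieces fit together, here's a minimal sketch of a CompleteChatAsync call with a small message collection; the message text and the console output at the end are placeholders for illustration, not part of our agent yet:

List<ChatMessage> messages = new()
{
    // Overall instructions for the AI agent (placeholder text)
    new SystemChatMessage("You are a helpful assistant."),
    // The user's prompt (placeholder text)
    new UserChatMessage("Hello! What can you do?")
};

// Send the messages to the deployed model and print its reply
ChatCompletion completion = await chatClient.CompleteChatAsync(messages);
Console.WriteLine(completion.Content[0].Text);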
We’ll provide a pair of chat messages: SystemChatMessage, which represents our AI agent’s overall instructions...