This article is an excerpt from the book Modern Generative AI with ChatGPT and OpenAI Models, by Valentina Alto. Harness the power of AI with innovative, real-world applications and unprecedented productivity boosts, powered by the latest advancements in AI technology such as ChatGPT and OpenAI models.
In this article, we’ll focus on the enterprise-level applications of OpenAI models and introduce the partnership between OpenAI and Microsoft, which gave rise to the Azure OpenAI (AOAI) Service. We will go through Microsoft’s milestones and developments in the field of artificial intelligence (AI), highlighting the journey that brought the Azure cloud into the OpenAI game and why this is a game-changer for large organizations. Finally, we will consider the topic of responsible AI and how to make sure your AI system complies with ethical standards.
In this article, we will discuss the following topics:
The following are the technical requirements for this article:
AOAI Service is a Microsoft product that provides REST API access to OpenAI’s powerful language models, such as GPT-3.5, Codex, and DALL-E. You can use these models for the very same tasks as OpenAI models, such as content generation, summarization, semantic search, and natural language-to-code translation.
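To make the REST API idea concrete, here is a minimal sketch of how a request to an AOAI deployment is assembled. The resource name, deployment name, API version, and key are all placeholders you would replace with your own values from the Azure portal; the actual network call is left commented out since it needs a live resource.

```python
import json
import urllib.request

# Hypothetical values -- substitute your own resource name, deployment
# name, and API key from the Azure portal.
RESOURCE = "my-aoai-resource"
DEPLOYMENT = "text-davinci-003"
API_KEY = "<your-api-key>"
API_VERSION = "2023-05-15"

# In AOAI, the deployment name (not the model name) appears in the URL path.
url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/completions?api-version={API_VERSION}"
)

payload = {
    "prompt": "Summarize: Azure OpenAI offers REST access to GPT models.",
    "max_tokens": 50,
}

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "api-key": API_KEY},
)
# urllib.request.urlopen(request) would send the call; it is commented out
# here because it requires a live resource and a valid key.
print(url)
```

Note how authentication uses an `api-key` header tied to the Azure resource, rather than an OpenAI account token.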
In the context of the Microsoft Azure AI portfolio, AOAI Service sits alongside the following Cognitive Services offerings:
Figure - AOAI Service General Availability (GA)
As with any other Cognitive Services offering, AOAI offers models that have already been trained and are ready to be consumed.
To create your AOAI resource, follow these instructions:
1. Navigate to the Azure portal at https://ms.portal.azure.com.
2. Click on Create a resource.
3. Type azure openai and click on Create.
4. Fill in the required information and click on Review + create.
This is shown in the following screenshot:
Figure – Steps to create an AOAI resource
This process might take a few minutes. Once it is ready, you can directly jump to its user-friendly interface, AOAI Playground, to test your models before deploying them:
Figure – AOAI UI and Playground
Note that AOAI Playground looks almost identical to the OpenAI Playground version we saw in Chapter 2. The difference here is that, to use AOAI models, you have to initiate a deployment, which is a serverless compute instance you can attach to a model. You can do so either in Playground or on the resource backend page in the Azure portal:
Figure – Creating a new AOAI deployment via Playground (A) or in the Azure portal (B)
For example, I created a deployment called text-davinci-003 with an associated text-davinci-003 model:
Figure – An active deployment of AOAI
In AOAI Playground, we can test those models either directly via the user interface or by embedding their APIs into our applications. In the next section, we are going to explore how to interact with Playground and try different model configurations. In Chapter 10, we will learn how to integrate AOAI’s Models API into enterprise applications.
AOAI Playground is the easiest way to get familiar with the underlying models and start planning which model version is the most suitable for your projects. The user interface presents different tabs and workspaces, as shown in the following screenshot:
Figure - Overview of AOAI Playground
Let’s explore each of them:
Figure – AOAI Chat workspace
It offers a similar experience to ChatGPT itself, with the possibility to configure your model with additional parameters (as we saw in Chapter 2 with OpenAI Playground). Furthermore, there is an additional feature that makes the Chat workspace very interesting, known as System message:
Figure – Example of System message
The System message is the set of instructions we give the model to tell it how to behave and interact with us. As with the prompt, the System message is a key component of a model’s configuration, since it massively affects model performance.
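Under the hood, the System message is simply the first entry in the list of messages sent with every chat request. Here is a minimal sketch of that structure; the instruction text is illustrative, not a recommendation:

```python
# A minimal sketch of how a System message travels with a chat request.
system_message = "You are a helpful assistant that answers concisely."

messages = [
    {"role": "system", "content": system_message},       # how to behave
    {"role": "user", "content": "What is Azure OpenAI?"}, # the actual turn
]

# The system turn leads the list and is resent with every request in the
# conversation, which is how it keeps steering the model's behaviour.
print(messages[0]["role"])
```

Because the System message is resent on every turn, changing it mid-conversation immediately changes how the model responds from that point on.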
For example, let’s instruct our model to behave as a JSON formatter assistant:
Figure – Example of a model acting as a JSON formatter assistant
As you can see from the previous screenshot, the model was able to produce a JSON file from some simple data, such as a name and an age, without us needing to specify any labels.
For example, we could ask our model to generate a quiz by giving it a description of the topic and a one-shot example, as shown here:
Figure – Example of a GPT model generating a quiz
Finally, as in the Chat workspace, with Completions, you can configure parameters such as the maximum number of tokens or the temperature (refer to Chapter 2 for a comprehensive list of those parameters and their meanings).
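Those Playground sliders map directly onto fields of the Completions request body. As a rough sketch (the values shown are examples, not recommendations for any particular task):

```python
# Illustrative parameter set for a Completions call; values are examples
# only, chosen to show the shape of the request body.
completion_params = {
    "prompt": "Write a one-line tagline for a coffee shop.",
    "max_tokens": 60,          # hard cap on the number of generated tokens
    "temperature": 0.7,        # 0 = near-deterministic, higher = more varied
    "top_p": 1.0,              # nucleus-sampling cutoff
    "frequency_penalty": 0.0,  # discourage verbatim repetition
    "presence_penalty": 0.0,   # encourage moving to new topics
    "stop": None,              # optional list of stop sequences
}
print(sorted(completion_params))
```

A common convention is to vary either `temperature` or `top_p`, but not both at once.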
Figure – List of AOAI deployments
Each deployment can host only one model. You can edit or delete your deployments at any time. As we mentioned previously, a model deployment is the enabler step for using either the Completions or Chat workspace within AOAI Service.
Figure – List of AOAI models
Here, we have text-similarity-curie-001. It doesn’t have an associated deployment, so it can be deployed (as the Deployable column shows). On the other hand, text-similarity-ada-002 already has a deployment, so it is no longer deployable.
Within this tab, you can also create a custom model by following a procedure called fine-tuning, which we explored in Chapter 2:
Figure – Example of model fine-tuning
Using this guided widget, you can upload your training and validation data to produce a customized model, starting from a base model (namely, text-davinci-002), which will be hosted on a dedicated deployment.
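The same fine-tuning job can also be submitted programmatically. The sketch below only assembles the request URL and body for an AOAI fine-tunes call; the resource name, file IDs, and API version are all hypothetical, and the exact endpoint path and version depend on your service release, so check the AOAI REST API reference before using it.

```python
import json

# Hypothetical identifiers -- substitute your own resource name and the
# file IDs returned when you uploaded your JSONL data.
RESOURCE = "my-aoai-resource"
API_VERSION = "2023-05-15"  # assumption; verify against the AOAI docs

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/"
    f"fine-tunes?api-version={API_VERSION}"
)

body = {
    "model": "text-davinci-002",       # base model to customize
    "training_file": "file-abc123",    # uploaded JSONL training data
    "validation_file": "file-def456",  # optional validation split
}

print(url)
print(json.dumps(body))
```

Once the job completes, the resulting custom model appears in the Models tab and can be attached to its own deployment, just like a base model.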
In Chapter 2, we saw that the training dataset should align with a specific format of the following type (called JSONL):
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
...
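Since JSONL is just one JSON object per line, producing and validating such a file takes only a few lines of Python. Here is a small sketch with a made-up two-example training set:

```python
import json

# A tiny, made-up training set in the prompt/completion format shown above.
examples = [
    {"prompt": "Capital of France ->", "completion": " Paris"},
    {"prompt": "Capital of Italy ->", "completion": " Rome"},
]

# JSONL is one JSON object per line -- no enclosing array, no commas.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Each non-empty line parses back independently.
with open("training_data.jsonl", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f if line.strip()]

print(loaded == examples)  # prints True
```

The line-per-record layout is what lets fine-tuning pipelines stream very large training files without loading them whole.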
To facilitate this formatting, OpenAI has developed a tool that can convert your data into this specific format, ready for fine-tuning. It can also provide suggestions on how to modify the data so that it can be used for fine-tuning. Plus, it accepts various data formats as input, including CSV, TXT, and JSON.
To use this tool, first install the OpenAI command-line interface (CLI) by running the following command:
pip install --upgrade openai
Once installed, you can run the tool, as follows:
openai tools fine_tunes.prepare_data -f <LOCAL_FILE>
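To give a sense of what the tool checks for, here is a rough Python sketch of the kind of per-record validation it performs. This is a simplified illustration, not the tool’s actual implementation — the real tool does much more, including deduplication, separator suggestions, and conversion from CSV/TXT/JSON:

```python
import json

def check_jsonl_record(line):
    """Rough sketch of per-record checks in the spirit of prepare_data;
    the real tool performs considerably more analysis."""
    try:
        record = json.loads(line)
    except json.JSONDecodeError:
        return ["line is not valid JSON"]
    problems = []
    if set(record) != {"prompt", "completion"}:
        problems.append("record must have exactly 'prompt' and 'completion' keys")
    elif not (isinstance(record["completion"], str)
              and record["completion"].startswith(" ")):
        # OpenAI's fine-tuning guidance recommends starting each
        # completion with a whitespace character.
        problems.append("completion should be a string starting with a space")
    return problems

print(check_jsonl_record('{"prompt": "Q ->", "completion": " A"}'))  # prints []
```

A record that passes these checks matches the JSONL format shown earlier; the real tool would then also suggest fixes for any records that fail.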
Figure – Example of uploading a file within AOAI Service
You can decide to upload files by selecting Local file or Azure blob or other shared web locations.
Once you’ve uploaded your files, you will be able to select them while creating customized models, via the Models tab.
Finally, as mentioned in the previous section, each model comes with a REST API that can be consumed in your applications.
In this article, we saw how the partnership between OpenAI and Microsoft has brought about a powerful and innovative AI solution for enterprise-level organizations: AOAI. This service combines OpenAI’s cutting-edge technology with Microsoft’s extensive cloud infrastructure to provide businesses with a scalable and customizable platform for building and deploying advanced AI applications.
We also dwelled on Microsoft’s strong focus on responsible AI practices and ethics, and how AOAI Service reflects this commitment to responsible AI, with features such as a content filter built into the platform.
As AI continues to transform industries and shape our future, the collaboration between OpenAI and Microsoft marks an important milestone in the development of enterprise-level AI solutions. AOAI empowers businesses to harness the power of AI to drive growth and innovation while ensuring ethical and responsible practices.
Valentina Alto graduated in 2021 in data science. Since 2020, she has been working at Microsoft as an Azure solution specialist, and since 2022, she has been focusing on data and AI workloads within the manufacturing and pharmaceutical industry. She has been working closely with system integrators on customer projects to deploy cloud architecture with a focus on modern data platforms, data mesh frameworks, IoT and real-time analytics, Azure Machine Learning, Azure Cognitive Services (including Azure OpenAI Service), and Power BI for dashboarding. Since commencing her academic journey, she has been writing tech articles on statistics, machine learning, deep learning, and AI in various publications and has authored a book on the fundamentals of machine learning with Python.