
OpenAI and ChatGPT for Enterprises

  • 9 min read
  • 14 Sep 2023



This article is an excerpt from the book Modern Generative AI with ChatGPT and OpenAI Models by Valentina Alto. Harness the power of AI with innovative, real-world applications and unprecedented productivity boosts, powered by the latest advancements in AI technology such as ChatGPT and OpenAI.

Introduction

In this article, we’ll focus on the enterprise-level applications of OpenAI models and introduce the partnership between OpenAI and Microsoft and the resulting Azure OpenAI (AOAI) Service. We will go through Microsoft’s milestones and developments in the field of artificial intelligence (AI), highlighting the journey that brought OpenAI models to the Azure cloud and why this is a game-changer for large organizations. Finally, we will consider the topic of responsible AI and how to make sure your AI system complies with ethical standards.

In this article, we will discuss the following topics:

  • The history of the partnership between Microsoft and OpenAI and the introduction of AOAI Service
  • The role of the public cloud in the context of OpenAI models
  • Responsible AI

Technical requirements

The following are the technical requirements for this article: an Azure subscription and access to Azure OpenAI Service within that subscription.

Azure OpenAI Service

AOAI Service is a product of Microsoft that provides REST API access to OpenAI’s powerful language models, such as GPT-3.5, Codex, and DALL-E. You can use these models for the very same tasks as OpenAI models, such as content generation, summarization, semantic search, and natural language to code translation.

In the context of the Microsoft Azure AI portfolio, AOAI Service sits alongside the following Cognitive Services offerings:

Figure – AOAI Service General Availability (GA)

As with any other Cognitive Services offering, AOAI offers models that have already been trained and are ready to be consumed.

To create your AOAI resource, follow these instructions:

1. Navigate to the Azure portal at https://ms.portal.azure.com.
2. Click on Create a resource.
3. Type azure openai and click on Create.
4. Fill in the required information and click on Review + create.

This is shown in the following screenshot:

Figure – Steps to create an AOAI resource

This process might take a few minutes. Once it is ready, you can directly jump to its user-friendly interface, AOAI Playground, to test your models before deploying them:

Figure – AOAI UI and Playground

Note that AOAI Playground looks almost identical to the OpenAI Playground version we saw in Chapter 2. The difference here is that, to use AOAI models, you have to initiate a deployment, which is a serverless compute instance you can attach to a model. You can do so either in Playground or on the resource backend page in the Azure portal:

Figure – Creating a new AOAI deployment via Playground (A) or in the Azure portal (B)

For example, I created a deployment called text-davinci-003 with an associated text-davinci-003 model:

Figure – An active deployment of AOAI

As with OpenAI Playground, we can test these models either directly via the user interface or by embedding their APIs into our applications. In the next section, we are going to explore how to interact with Playground and try different model configurations. In Chapter 10, we will learn how to integrate AOAI’s Models API into enterprise applications.
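To give a flavor of that application route before we move on, here is a minimal sketch of calling the deployment created above from Python. It assumes the pre-1.0 openai package (installed with pip install openai); the endpoint, key, and API version are placeholders you would replace with your own resource’s values:

import openai

# Point the openai package at the AOAI resource rather than openai.com
openai.api_type = "azure"
openai.api_base = "https://<your-resource-name>.openai.azure.com/"  # your resource endpoint
openai.api_version = "2023-05-15"                                   # assumed API version
openai.api_key = "<your-aoai-key>"

# Call the model through the deployment name (engine), not the model name
response = openai.Completion.create(
    engine="text-davinci-003",   # the deployment created above
    prompt="Summarize the benefits of cloud computing in one sentence.",
    max_tokens=100,
    temperature=0.7,
)
print(response.choices[0].text)

This is a sketch, not a definitive implementation; the key point is that your application talks to a deployment hosted on your own Azure resource rather than to the public OpenAI endpoint.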

Exploring Playground

AOAI Playground is the easiest way to get familiar with the underlying models and start deciding which model version is the most suitable for your project. The user interface presents different tabs and workspaces, as shown in the following screenshot:

Figure – Overview of AOAI Playground

Let’s explore each of them:

  • Playground | Chat: The Chat workspace is designed to be used only with conversational models such as GPT-3.5-turbo (the model behind ChatGPT):
Figure – AOAI Chat workspace

It offers a similar experience to ChatGPT itself, with the possibility of configuring your model with additional parameters (as we saw in Chapter 2 with OpenAI Playground). Furthermore, there is an additional feature that makes the Chat workspace particularly interesting, known as System message:

Figure – Example of System message

System message is the set of instructions we give the model to tell it how to behave and interact with us. As with the prompt, the System message is a key component of a model’s configuration, since it massively affects model performance.

For example, let’s instruct our model to behave as a JSON formatter assistant:

Figure – Example of a model acting as a JSON formatter assistant


As you can see from the previous screenshot, the model was able to produce a JSON file from some simple data, such as a name and age, without us needing to specify any labels.
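The same System message behavior can be reproduced from code. The following is a minimal sketch, assuming the pre-1.0 openai package configured for AOAI as in the earlier sketch, and a conversational deployment hypothetically named gpt-35-turbo:

import openai

# Assumes api_type, api_base, api_version, and api_key are already set for AOAI

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # hypothetical deployment name for a GPT-3.5-turbo model
    messages=[
        # The System message instructs the model on how to behave
        {"role": "system", "content": "You are a JSON formatter assistant. "
                                      "Return any data the user provides as a well-formed JSON object."},
        {"role": "user", "content": "Name: Ada, Age: 36"},
    ],
    temperature=0,
)
print(response.choices[0].message["content"])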

  • Playground | Completions: Unlike the previous workspace, the Completions workspace offers a sort of blank canvas where you can interact with your models. While GPT-3.5-turbo is designed for conversational tasks (which means it is consumed via a chatbot-like interface), the GPT-3 series contains more general-purpose models that can be used for a wide range of language tasks, such as content generation, summarization, and so on.

For example, we could ask our model to generate a quiz by giving it a description of the topic and a one-shot example, as shown here:

Figure – Example of a GPT model generating a quiz

Finally, as with the Chat workspace, in the Completions workspace you can configure parameters such as the maximum number of tokens or the temperature (refer to Chapter 2 for a comprehensive list of these parameters and their meanings).

  • Management | Deployments: Within the Deployments tab, you can create and manage new deployments to be associated with AOAI models. They are depicted here:
Figure – List of AOAI deployments

Each deployment can host only one model. You can edit or delete your deployments at any time. As we mentioned previously, a model deployment is the enabling step for using either the Completions or Chat workspace within AOAI Service.

  • Management | Models: Within this tab, you can quickly see which models are available within AOAI Service and, among them, which ones can still be deployed (that is, models that don’t have a deployment yet). For example, let’s consider the following screenshot:
Figure – List of AOAI models

Here, we have text-similarity-curie-001. It doesn’t have an associated deployment, so it can be deployed (as the Deployable column shows). On the other hand, text-similarity-ada-002 already has a deployment, so it is no longer deployable.

Within this tab, you can also create a custom model by following a procedure called fine-tuning, which we explored in Chapter 2:

Figure – Example of model fine-tuning

Using this guided widget, you can upload your training and validation data to produce a customized model, starting from a base model (namely, text-davinci-002), which will be hosted on a dedicated deployment.

Note

In Chapter 2, we saw that the training dataset should follow a specific format (called JSONL) of the following type:

{"prompt": "<prompt text>", "completion": "<ideal generated text>"}

{"prompt": "<prompt text>", "completion": "<ideal generated text>"}

{"prompt": "<prompt text>", "completion": "<ideal generated text>"}

...

To facilitate this formatting, OpenAI has developed a tool that can convert your data into this specific format, ready for fine-tuning. It can also provide suggestions on how to modify the data so that it can be used for fine-tuning. Plus, it accepts various data formats as input, including CSV, TXT, and JSON.

To use this tool, you can install the OpenAI command-line interface (CLI) by running the following command:

pip install --upgrade openai

Once installed, you can run the data preparation tool as follows:

openai tools fine_tunes.prepare_data -f <LOCAL_FILE>
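For completeness, the upload and fine-tuning steps can also be scripted. The following is a rough sketch using the pre-1.0 openai package (configured for AOAI as in the earlier examples); whether fine-tuning is available, and which base models it supports, depends on your resource and region, so treat this as an illustration rather than a recipe. The file name is a hypothetical output of the preparation tool above:

import openai

# Assumes the AOAI connection settings (api_type, api_base, api_version, api_key) are already configured

# Upload the prepared JSONL training file
training_file = openai.File.create(
    file=open("training_prepared.jsonl", "rb"),  # hypothetical output of fine_tunes.prepare_data
    purpose="fine-tune",
)

# Start a fine-tuning job from a base model; the resulting custom model is then hosted on its own deployment
job = openai.FineTune.create(
    training_file=training_file.id,
    model="text-davinci-002",  # base model named in the text; availability varies
)
print(job.id, job.status)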

  • Management | File Management: Finally, within the File Management tab, you can manage and upload your training and test data directly from the user interface, as shown here:

Figure – Example of uploading a file within AOAI Service

You can choose to upload files by selecting Local file or Azure blob or other shared web locations.

Once you’ve uploaded your files, you will be able to select them while creating customized models, via the Models tab.

Finally, as mentioned in the previous section, each model comes with a REST API that can be consumed in your applications.
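As an illustration of that REST surface, here is a minimal sketch that calls the completions endpoint of a deployment directly with the requests library; the resource name, key, deployment name, and API version are placeholders to be replaced with your own values:

import requests

resource = "<your-resource-name>"
deployment = "text-davinci-003"   # your deployment name
api_version = "2023-05-15"        # assumed API version
url = (f"https://{resource}.openai.azure.com/openai/deployments/"
       f"{deployment}/completions?api-version={api_version}")

headers = {
    "api-key": "<your-aoai-key>",  # AOAI uses an api-key header rather than a Bearer token
    "Content-Type": "application/json",
}
payload = {"prompt": "Write a tagline for an ice cream shop.", "max_tokens": 50}

response = requests.post(url, headers=headers, json=payload)
print(response.json()["choices"][0]["text"])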

Conclusion

In this article, we saw how the partnership between OpenAI and Microsoft has brought about a powerful and innovative AI solution for enterprise-level organizations: AOAI. This service combines OpenAI’s cutting-edge technology with Microsoft’s extensive cloud infrastructure to provide businesses with a scalable and customizable platform for building and deploying advanced AI applications.

We also discussed Microsoft’s strong focus on responsible AI practices and ethics, and how AOAI Service reflects this commitment, with features such as a content filter built into the platform.

As AI continues to transform industries and shape our future, the collaboration between OpenAI and Microsoft marks an important milestone in the development of enterprise-level AI solutions. AOAI empowers businesses to harness the power of AI to drive growth and innovation while ensuring ethical and responsible practices.

Author Bio

Valentina Alto graduated in 2021 in data science. Since 2020, she has been working at Microsoft as an Azure solution specialist, and since 2022, she has been focusing on data and AI workloads within the manufacturing and pharmaceutical industry. She has been working closely with system integrators on customer projects to deploy cloud architecture with a focus on modern data platforms, data mesh frameworks, IoT and real-time analytics, Azure Machine Learning, Azure Cognitive Services (including Azure OpenAI Service), and Power BI for dashboarding. Since commencing her academic journey, she has been writing tech articles on statistics, machine learning, deep learning, and AI in various publications and has authored a book on the fundamentals of machine learning with Python.