Using the Python library to call the OpenAI API
Previously, we used HTTP requests in Postman to call the OpenAI API. Now, we will move to another method of calling the API: Python, with the dedicated OpenAI Python library. Why does this matter?
Utilizing the Python library for OpenAI API calls offers a significant advantage over manual HTTP requests in tools such as Postman, especially for developers looking to integrate ChatGPT functionality into their applications seamlessly.
Python’s library simplifies the intricacies involved in making direct HTTP requests by offering a more user-friendly and intuitive interface. This facilitates quick prototyping, streamlined error management, and efficient parsing of responses. The library wraps the fundamental details of the protocol, allowing developers to concentrate on their application’s essential functionality without being bogged down by the specifics of request headers, query strings, and HTTP methods.
Furthermore, Python’s extensive package ecosystem readily supports the integration of the OpenAI API with other services and systems, allowing for a scalable and maintainable code base.
Overall, if you are serious about building intelligent applications with the OpenAI API, you need to call the API with a programming language that enables complex logic and tie-ins to other systems. Python, through the OpenAI library, is one way to accomplish that.
In this recipe, we will create some simple API calls using Python and the OpenAI library. More information on the library can be found here: https://github.com/openai/openai-python.
Getting ready
Ensure you have an OpenAI platform account with available usage credits. If you don’t, please follow the Setting up your OpenAI Playground environment recipe in Chapter 1.
Furthermore, ensure you are logged in to a Google account and have access to a notebook. You can verify this by going to https://colab.google/ and selecting New Notebook at the top right. After that, you should have a blank screen with an empty notebook open.
All the recipes in this chapter have the same requirements.
How to do it…
- In your Google Colab notebook, click the first empty cell and type the following code to download and install the OpenAI Python library. After you have typed the code in, press Shift + Enter to run the code inside the cell. Alternatively, you can run the code inside the cell by clicking the Play button to the left of the cell. This code installs the OpenAI Python library and all its dependencies. You may see output such as `Requirements already satisfied` or `Installing httpcore`. This is Colab installing the packages that the OpenAI library depends on, and is perfectly normal:

```python
!pip install openai
from openai import OpenAI
```

- Ensure that the words `Successfully installed openai-X.XX.X` are visible, as seen in Figure 4.1.
Figure 4.1 – Output of Jupyter notebook after installing the OpenAI library
- Next, we need to perform authentication. This is similar to the previous chapters, where we had to authenticate our Postman requests by putting our API key in a Header parameter called Authorization. In Python, it’s much simpler. In the cell below the one you used in step 1, write the following code and press Shift + Enter. Note that you should replace `<api-key>` with the API key that you generated in the last recipe in Chapter 1:

```python
api_key = "<api-key>"
client = OpenAI(api_key=api_key)
```
- We will now make a chat completion request to the OpenAI API. Similar to Postman, we can use different endpoints and define a variety of different parameters within the request in Python. Type the following code into a new cell below and press Shift + Enter, which runs the code and saves the output in a variable called `completion`:

```python
completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are an assistant that creates a slogan based on company description"},
        {"role": "user", "content": "A company that sells ice cream"}
    ],
    n=1,
    temperature=1
)
```
- Output the `completion` variable, which is a `ChatCompletion` object. We can convert this into the more familiar JSON format (exactly as in Postman) by typing the following in the cell below and running the code by pressing Shift + Enter:

```python
import json

completion_json = json.loads(completion.json())
print(completion_json)
```
Figure 4.2 shows the output that you will see after running this code.
Figure 4.2 – JSON output of the Python OpenAI completion request
- Using Python, we can parse the JSON and output only the part that contains the company slogan. We can do this by typing the following code into the cell below and pressing Shift + Enter to run the code:

```python
print(completion_json['choices'][0]['message']['content'])
```
Figure 4.3 – Input and output of step 6
- You now have a working Python Jupyter notebook that calls the OpenAI API, makes a chat completion request, and outputs the result.
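The parsing step can be understood (and tested) without calling the API at all. Here is a minimal sketch that extracts the slogan from a hand-written dictionary shaped like the chat completions response shown in Figure 4.2; the `id` and slogan values below are invented for illustration:

```python
# A sample response dictionary, written by hand in the shape returned by
# the chat completions endpoint (the values here are made up).
completion_json = {
    "id": "chatcmpl-example",
    "object": "chat.completion",
    "model": "gpt-3.5-turbo",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Scoops of happiness in every cone!"
            },
            "finish_reason": "stop"
        }
    ]
}

# Drill into the first choice and pull out just the assistant's message text,
# exactly as in the final step of the recipe.
slogan = completion_json["choices"][0]["message"]["content"]
print(slogan)  # → Scoops of happiness in every cone!
```

If the API returns multiple choices (for example, when `n` is greater than 1), each one appears as another element of the `choices` list, and you would index it accordingly.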
How it works…
In this recipe, we performed the same actions as in previous recipes; the difference is that we used the OpenAI Python library instead of invoking HTTP requests through Postman. We authenticated using our API key, made a chat completion request, adjusted several parameters (such as model, messages, n, and temperature), and printed the output result.
Code walk-through
The code that was run within the recipe can be explained in four parts:
- Library installation: The first line, `!pip install openai`, is a command that installs the OpenAI library as a package in the Python environment. The second line, `from openai import OpenAI`, imports the `OpenAI` class into the current Python namespace, enabling the use of the library’s functions and classes.
- Authentication: The `client = OpenAI(api_key=api_key)` line creates a client object that authenticates every subsequent request to the OpenAI API with your API key.
- API call: The `client.chat.completions.create()` call makes a chat completion request to the API. As you can see, it contains the typical parameters that we have discussed in previous chapters.
- Output: The `print(completion_json)` line prints out the raw response from the API call. The response includes not only the content of the completion but also some metadata, similar to when we make HTTP requests with Postman. The `print(completion_json['choices'][0]['message']['content'])` line digs into the response to extract and print only the content of the message.
Most API calls in Python follow these steps. It should be noted that steps 1 and 2 (i.e., library installation and authentication) only need to be performed once. This is because once a library is installed, it becomes a part of your Python environment, ready to be used in any program without needing to be reinstalled each time. Similarly, authentication, which is often a process of verifying credentials to gain access to the API, is typically required only once per session or configuration, as your credentials are then stored and reused for subsequent API calls.
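Because installation and authentication happen once, the part that varies from call to call is the request itself. As a sketch of how you might separate the two, here is a small helper of our own (`build_slogan_request` is not part of the OpenAI library) that assembles the per-call arguments; in the notebook, you would pass its result to `client.chat.completions.create(**request)`:

```python
def build_slogan_request(description):
    """Assemble the keyword arguments for a slogan-generating chat
    completion request, using the same parameters as in this recipe."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system",
             "content": "You are an assistant that creates a slogan "
                        "based on company description"},
            {"role": "user", "content": description},
        ],
        "n": 1,
        "temperature": 1,
    }

request = build_slogan_request("A company that sells ice cream")
# In the notebook (with a configured client), you would then run:
# completion = client.chat.completions.create(**request)
print(request["model"])  # → gpt-3.5-turbo
```

Factoring request construction into a function like this is exactly the kind of reuse that a programming language gives you and Postman does not.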
Overall, we delved into using the OpenAI Python library for interacting with the OpenAI API, transitioning from the HTTP requests method in Postman. We will continue following this process in future recipes.
Components of the Python library
The endpoints and parameters that we have discussed in previous chapters are all available within the OpenAI Python library. The syntax is slightly different, as we are now using Python code rather than JSON (through Postman) to make API requests, but the fundamental idea is the same. Here is a table that compares endpoint calls between Postman and Python libraries.
| Endpoint | HTTP request in Postman through JSON (the Body component) | Python OpenAI library |
| --- | --- | --- |
| Chat completions | `{"model": "gpt-3.5-turbo", "messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello!"}]}` | `completion = client.chat.completions.create(model="gpt-3.5-turbo", messages=[{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello!"}])` |
| Images | `{"prompt": "A cute baby sea otter", "n": 2, "size": "1024x1024"}` | `client.images.generate(prompt="A cute baby sea otter", n=2, size="1024x1024")` |
| Audio | `-F file="@/path/to/file/audio.mp3" -F model="whisper-1"` | `audio_file = open("audio.mp3", "rb")` then `transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)` |
Table 4.1 – Comparing endpoint calls between Postman and Python libraries
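The correspondence in Table 4.1 is largely mechanical: the JSON body you would send in Postman maps directly onto Python keyword arguments. A small sketch using only the standard library makes this concrete; in the notebook, the resulting dictionary could be unpacked into the call with `client.chat.completions.create(**kwargs)`:

```python
import json

# The chat completions body exactly as it appears in the Postman column.
postman_body = '''
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
}
'''

# json.loads turns the JSON body into a Python dict whose keys match the
# library's keyword arguments one-to-one.
kwargs = json.loads(postman_body)
print(sorted(kwargs.keys()))  # → ['messages', 'model']
```

This is why the fundamental idea is the same in both tools: the library is constructing and sending essentially this JSON for you.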
Benefits and drawbacks of using the Python library
There are several benefits to using the Python library, aside from it being a prerequisite for future recipes. It provides abstraction over the API request itself, leading to the following benefits:
- Simplified authentication: The library handles API key and token management, abstracting away the details of the authentication process from the user. For example, in this case, we did not need to construct a Bearer token header, as we did with HTTP requests. Furthermore, unlike HTTP requests, we do not need to declare our API key for every single request.
- Ease of use: It provides a high-level interface with methods and classes that represent API endpoints, making it easier to understand and implement; the library takes care of constructing the correct HTTP requests, encoding parameters, and parsing the responses.
- Do more: The library often includes convenience features that are not available with simple HTTP requests, such as pagination helpers, streaming, session management, embeddings, function calls, and more (which is why we switched over to the Python library in this chapter – the subsequent recipes cover these features).
- Programmability: The Python OpenAI library leverages the full programming capabilities of Python, enabling variables, logical conditioning, and functions (i.e., all the benefits of a programming language that you don’t get with Postman).
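As a sketch of the programmability point, here is how a plain Python loop can prepare requests for several companies at once, something that would require manual copy-and-paste in Postman (the company descriptions are invented, and each dictionary would be sent with `client.chat.completions.create(**r)` in the notebook):

```python
# Illustrative company descriptions; any strings would do.
companies = [
    "A company that sells ice cream",
    "A company that repairs bicycles",
    "A company that teaches coding to kids",
]

# Build one chat completion request per company.
requests = []
for description in companies:
    requests.append({
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system",
             "content": "You are an assistant that creates a slogan "
                        "based on company description"},
            {"role": "user", "content": description},
        ],
    })

print(len(requests))  # → 3
```

From here, you could add conditional logic, retries, or tie-ins to other systems around each call, which is precisely what a programming language buys you.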
There are, however, some specific downsides to using the Python library as well:
- Limited customization: High-level abstraction may limit direct access to certain API functionalities
- Maintenance and compatibility: There is a dependency on library updates and potential conflicts with different Python versions
- Performance overheads: Additional abstraction layers can lead to slower performance in resource-critical applications
- Reduced control: It offers less flexibility for users needing detailed control over API interactions