Changing the model parameter and understanding its impact on generated responses
In both Chapter 1 and Chapter 2, the chat completion requests were made using both the model and messages parameters, with model always set to the gpt-3.5-turbo value. We essentially ignored the model parameter. However, this parameter likely has a bigger impact on the generated responses than any other parameter. Contrary to popular belief, the OpenAI API is not just one model; it is powered by a diverse set of models with different capabilities and price points.
In this recipe, we will cover two main models (GPT-3.5 and GPT-4), learn how to change the model parameter, and observe how the generated responses vary between these two models.
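To see what this looks like in practice, here is a minimal sketch (not one of the book's own steps) of a chat completion request body you could send from Postman as a POST request to https://api.openai.com/v1/chat/completions, with your API key supplied in the Authorization header as a Bearer token. The prompt text is only an illustrative placeholder; changing the model value from gpt-3.5-turbo to gpt-4 is all that is needed to target the other model, provided your account has access to it:

{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "Explain what a chat completion is in one sentence."
    }
  ]
}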
Getting ready
Ensure you have an OpenAI Platform account with available usage credits. If you don’t, please follow the Setting up your OpenAI Playground environment recipe in Chapter 1.
Furthermore, ensure that you have Postman installed...