Democratizing NLP
Anyone with access to the OpenAI API can use GPT-3. The API is a general-purpose text in, text out interface that can be used for virtually any language task. To use it, you simply pass in text and get a text response back. The task might be sentiment analysis, writing an article, answering a question, or summarizing a document. As far as the API is concerned, it doesn't matter: every task is handled the same way, which makes the API simple enough for just about anyone to use, even non-programmers.
The text you pass in is referred to as a prompt, and the returned text is called a completion. GPT-3 uses the prompt to determine how best to complete the task. In the simplest case, a prompt can provide just a few words to get started with. For example, if the prompt were "If today is Monday, tomorrow is", GPT-3 would likely respond with "Tuesday", along with some additional text such as "If today is Tuesday, tomorrow is Wednesday", and so on. In other words, what you get out of GPT-3 depends on what you send to it.
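As a minimal sketch, a call to the API with that prompt might look like the following Python snippet. It assumes the openai Python package is installed and an API key is stored in an environment variable; the engine name and parameter values are illustrative and may differ depending on the library version and the models available to your account.

```python
import os
import openai

# Assumes an OpenAI API key stored in the OPENAI_API_KEY environment variable.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Send a prompt to a GPT-3 completion endpoint and read back the completion.
# The engine name and parameter values here are illustrative; check the
# OpenAI documentation for the models and settings available to you.
response = openai.Completion.create(
    engine="davinci",
    prompt="If today is Monday, tomorrow is",
    max_tokens=16,
    temperature=0.7,
)

# The completion text is returned alongside metadata about the request.
print(response.choices[0].text)
```

With a prompt like this, the printed completion would typically begin with something like "Tuesday", though the exact text varies from run to run.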
As you might guess, the quality of a completion depends heavily on the prompt. GPT-3 uses all of the text in a prompt to generate the most relevant completion. Every word, and the way the prompt is structured, influences the language model's prediction. So, understanding how to write and test prompts is the key to unlocking GPT-3's true potential.
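To see why structure matters, compare two prompts for the sentiment analysis task mentioned earlier. The wording below is only a sketch; either string could be passed as the prompt value in the call shown above.

```python
# A bare prompt gives GPT-3 little to go on, so the completion could wander
# in almost any direction.
bare_prompt = "I loved this movie."

# A structured prompt states the task and shows the pattern to follow with a
# couple of labeled examples, which usually produces a more predictable
# completion (here, ideally just "Positive").
structured_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: The plot was dull and the acting was worse.
Sentiment: Negative

Review: An absolute delight from start to finish.
Sentiment: Positive

Review: I loved this movie.
Sentiment:"""
```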