BERT and GPT models
BERT and GPT models take raw text as input, and their main output is a single predicted word. BERT predicts a masked word that can appear in the middle of a sentence, while GPT predicts the next word at the end of it. This means that products designed around these models need to process data differently than products built on the other models.
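The two prediction modes can be illustrated with a deliberately tiny sketch. The bigram "model" below is a stand-in for a real BERT or GPT network, used only to show the difference between filling in a masked word (context on both sides) and predicting the next word (left context only):

```python
from collections import Counter

# Toy corpus standing in for the model's training data (illustrative only).
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Bigram counts: how often word B follows word A.
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word(prev):
    """GPT-style prediction: the most likely word AFTER the given word."""
    candidates = [(b, c) for (a, b), c in bigrams.items() if a == prev]
    return max(candidates, key=lambda x: x[1])[0]

def fill_mask(prev, nxt):
    """BERT-style prediction: a masked word scored using BOTH sides of context."""
    candidates = [
        (b, bigrams[(prev, b)] + bigrams[(b, nxt)])
        for b in set(corpus)
        if (prev, b) in bigrams and (b, nxt) in bigrams
    ]
    return max(candidates, key=lambda x: x[1])[0]

print(next_word("the"))        # continuation at the end of a prompt -> "cat"
print(fill_mask("cat", "on"))  # masked word in the middle of a sentence -> "sat"
```

A real system would, of course, replace the bigram table with a trained transformer, but the interface is the same: one word comes out, and its position relative to the context differs between the two architectures.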
Figure 9.3 provides an overview of this kind of processing, with a focus on prompt engineering at the beginning and output processing at the end. The figure shows a machine learning model based on the BERT or GPT architecture at its center. The model is an important component, yet it is only a small element of the entire system (or tool).
The tool’s workflow starts on the left-hand side with input processing. From the user’s perspective, this is a prompt that asks the model to do something, such as "Write a function that reverses a string in C". The tool turns that prompt into a useful input for the model – it can find a similar C program as input for...
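One way the input-processing step could work is sketched below. The snippet store and the word-overlap similarity measure are illustrative assumptions, not the tool's actual implementation; a production system would typically use an embedding-based code search instead:

```python
# Hypothetical snippet store; a real tool would use a code search index.
snippets = {
    "reverse a linked list in C": "node *rev(node *h) { /* ... */ }",
    "reverse a string in C": "void rev(char *s) { /* swap ends inward */ }",
    "sort an array in C": "void sort(int *a, int n) { /* ... */ }",
}

def build_model_input(prompt):
    """Augment the user's prompt with the most similar stored example.
    Similarity here is plain word overlap (an illustrative stand-in)."""
    words = set(prompt.lower().split())
    best = max(snippets, key=lambda k: len(words & set(k.lower().split())))
    return f"// Similar example: {snippets[best]}\n// Task: {prompt}\n"

model_input = build_model_input("Write a function that reverses a string in C")
print(model_input)
```

The enriched input gives the model concrete, in-domain context to condition on, which is the essence of the prompt-engineering stage shown on the left of Figure 9.3.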