Generating text using GPT
BERT and GPT are both state-of-the-art NLP models based on the Transformer architecture. However, they differ in their architectures, training objectives, and use cases. We will first learn more about GPT and then generate our own version of War and Peace with a fine-tuned GPT model.
Pre-training of GPT and autoregressive generation
GPT (Improving Language Understanding by Generative Pre-Training, Alec Radford et al., 2018) has a decoder-only Transformer architecture, while BERT is encoder-only. This means GPT relies on masked (causal) self-attention in its decoder blocks and focuses on predicting the next token in a sequence.
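To make the masking idea concrete, here is a minimal sketch (not the model's actual implementation) of single-head masked self-attention, where an upper-triangular mask prevents each position from attending to future tokens; the tensor sizes and variable names are illustrative:

```python
import torch

# Minimal single-head masked self-attention, as used in GPT's decoder blocks.
# The causal mask hides future positions so each token can only attend to
# itself and the tokens before it, which is what lets the model learn
# to predict the next token.
seq_len, d_model = 5, 8                      # illustrative sizes
queries = torch.randn(seq_len, d_model)
keys = torch.randn(seq_len, d_model)
values = torch.randn(seq_len, d_model)

# Scaled dot-product attention scores between every pair of positions
scores = queries @ keys.T / d_model ** 0.5

# Upper-triangular (future) positions are masked out with -inf
causal_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
scores = scores.masked_fill(causal_mask, float("-inf"))

weights = torch.softmax(scores, dim=-1)      # each row sums to 1 over past tokens only
contextualized = weights @ values            # weighted mix of past value vectors
print(weights)                               # entries above the diagonal are all zero
```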
Think of BERT as a super detective. It gets a sentence with some words hidden (masked) and has to guess what they are based on the clues (surrounding words) in both directions, like looking at a crime scene from all angles. GPT, on the other hand, is more like a creative storyteller. It is pre-trained using an autoregressive language modeling objective: given the tokens seen so far, it learns to predict the next token, so it can generate text one token at a time from left to right.
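One way to see autoregressive generation in practice is the short sketch below, which uses the Hugging Face transformers library with the publicly released GPT-2 checkpoint (a later, openly available GPT model); the prompt and sampling settings are arbitrary choices for demonstration rather than the configuration used later in this chapter:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load a pre-trained GPT-2 model and its tokenizer (illustrative checkpoint name)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "It was a bright cold day in April"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Generate tokens one at a time: each new token is conditioned on the prompt
# plus all previously generated tokens (the autoregressive loop is handled
# internally by generate()).
output_ids = model.generate(
    input_ids,
    max_length=40,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```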