Summary
In this chapter, we explored the new era of transformer models trained with billions of parameters on supercomputers. OpenAI’s GPT models are taking NLU beyond the reach of most NLP development teams.
We saw how a GPT-3 zero-shot model can perform many NLP tasks through an API, and even directly online without one. The online version of Google Translate has already paved the way for mainstream online usage of AI.
We explored the design of GPT models, which are all built on the original transformer’s decoder stack. The masked attention sub-layer continues the philosophy of left-to-right training: each position attends only to the tokens to its left. However, the sheer scale of the computation and the subsequent self-attention sub-layer make it highly efficient.
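To make the mechanism concrete, here is a minimal sketch of masked (causal) self-attention for a single head; the function name, shapes, and NumPy formulation are illustrative assumptions, not the chapter's implementation.

```python
# Minimal sketch of masked (causal) self-attention for one head.
# Illustrative only: names and shapes are assumptions, not the book's code.
import numpy as np

def masked_self_attention(q, k, v):
    """q, k, v: (seq_len, d_k) arrays for a single attention head."""
    seq_len, d_k = q.shape
    scores = q @ k.T / np.sqrt(d_k)                        # (seq_len, seq_len) similarities
    # Causal mask: position i may only attend to positions <= i.
    mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)                  # block attention to future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                                      # weighted sum of values

# Example: 4 tokens, 8-dimensional head
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(masked_self_attention(x, x, x).shape)  # (4, 8)
```

The upper-triangular mask is what enforces left-to-right training: the softmax weights of future positions are driven to zero, so each token's representation depends only on the tokens that precede it.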
We then implemented a 345M-parameter GPT-2 model with TensorFlow. The goal was to interact with a trained model to see how far we could get with it. We saw that the context we provided conditioned the outputs. However, it did not reach the results...