Summary
This chapter covered the current best-performing techniques in NLP: transformers and pretrained models. We also demonstrated how these techniques can be applied to your own application-specific data, using both local pretrained models and cloud-based models.
Specifically, you learned about the basic concepts behind attention, transformers, and pretrained models, and then applied the pretrained BERT transformer model to a classification problem. Finally, we looked at using the cloud-based GPT-3 system to generate data and to process application-specific data.
In Chapter 12, we will turn to a different topic: unsupervised learning. Up to this point, all of our models have been supervised, which, you will recall, means that the data has been annotated with the correct processing result. Next, we will discuss applications of unsupervised learning, including topic modeling and clustering. We will also talk about the value...