Summary
In this chapter, we analyzed the difference between the human language representation process and the way machine intelligence must perform transduction. We saw that transformers rely on the outputs of our incredibly complex thought processes expressed in written language. Language remains the most precise way to convey a massive amount of information. A machine has no senses, so it must convert speech to text before it can extract meaning from raw datasets.
We then explored how to measure the performance of multi-task transformers. Transformers' ability to obtain top-ranking results on downstream tasks is unique in the history of NLP. We went through the tough SuperGLUE tasks that brought transformers to the top ranks of the GLUE and SuperGLUE leaderboards.
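The benchmark tasks mentioned above are typically scored with simple classification metrics. As a minimal sketch, the snippet below computes accuracy for a BoolQ-style yes/no question-answering task; the questions, gold answers, and predictions are invented for illustration, not taken from the actual SuperGLUE dataset:

```python
# A minimal sketch of scoring a BoolQ-style yes/no task.
# The items below are invented for illustration; real evaluation
# uses the SuperGLUE BoolQ dataset and an actual model's outputs.

# Each item: (question, gold answer, model prediction)
boolq_style_items = [
    ("is the sky blue on a clear day", True, True),
    ("do penguins live at the north pole", False, True),
    ("is water composed of hydrogen and oxygen", True, True),
]

def accuracy(items):
    """Fraction of items where the prediction matches the gold label."""
    correct = sum(1 for _, gold, pred in items if gold == pred)
    return correct / len(items)

print(f"BoolQ-style accuracy: {accuracy(boolq_style_items):.2f}")
```

Leaderboard scores for SuperGLUE aggregate metrics like this across all of its tasks, which is what makes a single high overall score so hard to achieve.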
BoolQ, CB, WiC, and the many other tasks we covered are by no means easy to process, even for humans. We went through examples of several downstream tasks that show the difficulty transformer models must face...