Summary
This chapter analyzed the difference between the human language representation process and the way machine intelligence performs transduction. We saw that transformers must rely on the outputs of our incredibly complex thought processes expressed in written language. Language remains the most precise way to express massive amounts of information. A machine has no senses of its own, so it must convert speech to text in order to extract meaning from raw datasets.
We then explored how to measure the performance of multi-task transformers. The ability of transformers to obtain top-ranking results across downstream tasks is unique in NLP history. We worked through the tough SuperGLUE tasks on which transformers rose to the top ranks of the GLUE and SuperGLUE leaderboards.
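As a reminder of how such a leaderboard score is obtained, the sketch below loads the BoolQ validation split and scores a trivial majority-class baseline. It is a minimal sketch, assuming the Hugging Face datasets library and its super_glue configuration are available; the majority-class baseline is only an illustration, and a real submission would replace those predictions with the outputs of a fine-tuned transformer.

```python
# A minimal sketch of SuperGLUE-style scoring, assuming the Hugging Face
# `datasets` library and its `super_glue` configuration are available.
from collections import Counter

from datasets import load_dataset

# BoolQ: yes/no questions about a short passage, scored with accuracy.
boolq = load_dataset("super_glue", "boolq")
validation = boolq["validation"]

# Illustrative baseline only: always predict the majority class of the
# training split. A real evaluation would use a fine-tuned transformer.
majority_label = Counter(boolq["train"]["label"]).most_common(1)[0][0]
predictions = [majority_label] * len(validation)

correct = sum(
    int(pred == gold) for pred, gold in zip(predictions, validation["label"])
)
accuracy = correct / len(validation)
print(f"BoolQ validation accuracy (majority-class baseline): {accuracy:.3f}")
```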
BoolQ, CB, WiC, and the many other tasks we covered are by no means easy to process, even for humans. We went through examples of several downstream tasks that show the difficulty transformer models face in proving their...
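To see why these tasks are hard even for people, the short snippet below prints a single Word-in-Context (WiC) sample: the model must decide whether the target word carries the same sense in two different sentences, with nothing but the raw text to go on. It assumes the same Hugging Face datasets setup as above and only inspects the data; no model is run.

```python
# A minimal sketch, assuming the Hugging Face `datasets` library with the
# `super_glue` configuration. It only inspects one WiC example.
from datasets import load_dataset

# WiC: does the target word have the same meaning in both sentences?
wic = load_dataset("super_glue", "wic")
sample = wic["validation"][0]

print("word:      ", sample["word"])
print("sentence1: ", sample["sentence1"])
print("sentence2: ", sample["sentence2"])
# label is 1 when the word is used in the same sense, 0 otherwise
print("same sense:", bool(sample["label"]))
```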