Method 2: SRL first
The transformer could not find who was driving to Las Vegas and thought the driving came from Nat King Cole instead of Jo and Maria.
What went wrong? Can we see what the transformer thinks and obtain an explanation? To find out, let's go back to semantic role labeling. If necessary, take a few minutes to review Chapter 10, Semantic Role Labeling with BERT-Based Transformers.
Let's run the same sequence in the Semantic Role Labeling section of AllenNLP, https://demo.allennlp.org/semantic-role-labeling, to obtain a visual representation of the verb drove in our sequence by running the SRL BERT model we used in the previous chapter:
Figure 11.2: SRL run on the text
SRL BERT found 19 frames. In this section, we focus on drove.
Note: The results may vary from one run to another or when AllenNLP updates the model versions.
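Instead of reading the frames off the demo page, we can also inspect them programmatically. A minimal sketch of how this might look: AllenNLP's SRL predictor returns a dictionary with a "words" list and a "verbs" list, where each verb frame carries BIO tags aligned with the words. The sample_output dictionary below is an illustrative, hand-written stand-in for such a result (the exact tags and sentence are assumptions, not the book's actual model output), and argument is a hypothetical helper, not part of the AllenNLP API:

```python
# Illustrative AllenNLP-style SRL output for a sequence like the one in the
# chapter. This dict is hand-written for the example, not real model output.
sample_output = {
    "words": ["Jo", "and", "Maria", "drove", "to", "Las", "Vegas"],
    "verbs": [
        {
            "verb": "drove",
            # BIO tags, one per word: ARG0 is the agent, B-V marks the verb.
            "tags": ["B-ARG0", "I-ARG0", "I-ARG0", "B-V",
                     "B-ARG4", "I-ARG4", "I-ARG4"],
        }
    ],
}

def argument(output, verb, role):
    """Return the words tagged with the given role for the given verb frame."""
    for frame in output["verbs"]:
        if frame["verb"] == verb:
            return [word for word, tag in zip(output["words"], frame["tags"])
                    if tag.endswith(role)]
    return []

print(argument(sample_output, "drove", "ARG0"))  # → ['Jo', 'and', 'Maria']
```

Filtering the tags this way lets us check directly which span SRL BERT assigned as the agent of drove.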
We can see the problem. The argument of the verb drove is Jo and Maria. It seems that the inference...