Summary
In this chapter, we explored semantic role labeling (SRL). SRL tasks are difficult for both humans and machines, yet transformer models have shown that, to a certain extent, they can reach human baselines on many NLP tasks.
We found that a simple BERT-based transformer can perform predicate sense disambiguation: it identified the meaning of a verb (predicate) without lexical or syntactic labeling. Shi and Lin (2019) used a standard "sentence + verb" input format to train their BERT-based transformer.
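To make that input format concrete, here is a minimal sketch of how a sentence and its predicate can be encoded as a BERT sequence pair. It assumes the Hugging Face transformers library and the generic bert-base-uncased checkpoint; both are illustrative choices, not Shi and Lin's original code, and the example sentence is hypothetical:

```python
# Minimal sketch of the "sentence + predicate" input format.
# Assumptions: Hugging Face `transformers` and the generic
# `bert-base-uncased` checkpoint, not Shi and Lin's original setup.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

sentence = "Marvin walked in the park."
predicate = "walked"

# BERT receives the sentence and the predicate as a sequence pair:
# [CLS] sentence tokens [SEP] predicate tokens [SEP]
encoding = tokenizer(sentence, predicate, return_tensors="pt")

print(tokenizer.decode(encoding["input_ids"][0]))
# [CLS] marvin walked in the park. [SEP] walked [SEP]
```

A classifier head on top of this pair encoding can then predict the predicate's sense, with no lexical or syntactic features added to the input.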
We found that a transformer trained with this stripped-down "sentence + predicate" input could solve both simple and complex problems. It reached its limits on relatively rare verb forms. However, these limits are not final: by adding such difficult problems to the training dataset, the research team could improve the model further.
We also discovered AI being used for the good of humanity: the Allen Institute for AI has made many free AI resources...