Toward Syntax-Free Semantic Role Labeling with ChatGPT and GPT-4
Transformers have made more progress in the past few years than NLP made in the previous generation. Earlier NLP models were first trained to understand a language’s basic syntax before running Semantic Role Labeling (SRL). The software relied on syntax trees, rule bases, and parsers. The performance of such systems was limited because the sheer number of possible word combinations leads to a practically infinite number of contexts.
Shi and Lin (2019) started their paper by asking whether preliminary syntactic and lexical training could be skipped. Could a system become “syntax-free” and understand language without relying on pre-designed syntax trees? Could a BERT-based model perform SRL without going through those classical training phases? The answer is yes!
Shi and Lin (2019) suggested that SRL can be treated as a sequence labeling task and provided a standardized input format, as sketched in the example below. Since then, OpenAI has reached near human-level syntax-free...
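To make the sequence labeling view concrete, here is a minimal sketch of the kind of input/output format this framing implies: each token of the sentence is paired with a binary predicate indicator, and the model emits one BIO-style argument tag per token. The sentence, the tag inventory, and the decoding helper below are hypothetical illustrations of the idea, not Shi and Lin’s actual code.

```python
# A minimal sketch of SRL framed as sequence labeling (BIO tagging).
# The predicate indicator and the tag inventory follow common
# PropBank-style conventions; the sentence, labels, and helper
# below are illustrative only.

tokens = ["Marta", "sold", "the", "car", "to", "John"]

# One binary flag per token marking the predicate whose arguments we label.
predicate_mask = [0, 1, 0, 0, 0, 0]  # "sold" is the predicate

# BIO labels a model would be trained to emit: ARG0 = seller,
# V = predicate, ARG1 = thing sold, ARG2 = buyer.
bio_labels = ["B-ARG0", "B-V", "B-ARG1", "I-ARG1", "B-ARG2", "I-ARG2"]

def decode_arguments(tokens, labels):
    """Group contiguous B-/I- tags back into labeled argument spans."""
    spans, current_role, current_span = [], None, []
    for token, label in zip(tokens, labels):
        if label.startswith("B-"):
            # A new span begins; flush any span in progress.
            if current_role:
                spans.append((current_role, " ".join(current_span)))
            current_role, current_span = label[2:], [token]
        elif label.startswith("I-") and current_role == label[2:]:
            # Continue the current span.
            current_span.append(token)
        else:
            # An "O" tag or a mismatched I- tag ends the span.
            if current_role:
                spans.append((current_role, " ".join(current_span)))
            current_role, current_span = None, []
    if current_role:
        spans.append((current_role, " ".join(current_span)))
    return spans

print(decode_arguments(tokens, bio_labels))
# [('ARG0', 'Marta'), ('V', 'sold'), ('ARG1', 'the car'), ('ARG2', 'to John')]
```

Note that nothing in this pipeline consults a parse tree: the predicate indicator and the per-token BIO tags alone encode the predicate-argument structure, which is what makes the approach “syntax-free.”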