Summary
In this chapter, we explored Semantic Role Labeling (SRL). SRL tasks are difficult for both humans and machines. Transformer models have shown that, for many NLP tasks, human baselines can be approached to a certain extent.
We first described the revolutionary syntax-free approach of recent LLMs. AI is undergoing a significant paradigm shift from task-specific training to general-purpose generative AI models such as OpenAI’s GPT series.
We ran several examples with a general-purpose model, ChatGPT with GPT-4. We confirmed that there is no silver bullet and that the ultimate choice depends on the goals of a project.
We found that a general-purpose LLM can perform predicate sense disambiguation. We ran basic examples in which a transformer could identify the meaning of a verb (predicate) without lexical or syntactic labeling.
We found that a transformer trained on a stripped-down sentence + predicate input could solve simple and complex problems. Challenging tasks...
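The stripped-down input described above can be sketched as a minimal prompt builder. This is an illustrative assumption, not the exact prompt used in the chapter: the `build_srl_prompt` helper and its wording are hypothetical, showing only the idea of passing a sentence and a predicate with no lexical or syntactic annotations.

```python
# Hypothetical sketch of a stripped-down "sentence + predicate" input
# for semantic role labeling with a general-purpose LLM.
# The helper name and prompt wording are assumptions for illustration.

def build_srl_prompt(sentence: str, predicate: str) -> str:
    """Build a minimal SRL prompt: just the raw sentence and the
    predicate to disambiguate, with no lexical or syntactic labels."""
    return (
        "Perform semantic role labeling.\n"
        f"Sentence: {sentence}\n"
        f"Predicate: {predicate}\n"
        "List the arguments (agent, theme, location, etc.) of the predicate."
    )

prompt = build_srl_prompt("Bob drove the car to work.", "drove")
print(prompt)
```

A prompt like this could then be sent to a general-purpose model, for example through a chat-completion API, leaving the disambiguation of the predicate's sense entirely to the model.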