Generalizing Natural Language Analysis through Span-relation Representations

ACL 2020 · Zhengbao Jiang, Wei Xu, Jun Araki, Graham Neubig

Natural language processing covers a wide variety of tasks that predict syntax, semantics, and information content, and usually each type of output is generated with a specially designed architecture. In this paper, we provide the simple insight that a great variety of tasks can be represented in a single unified format consisting of labeled spans and relations between spans, so that a single task-independent model can be used across different tasks. We perform extensive experiments to test this insight on 10 disparate tasks spanning dependency parsing (syntax), semantic role labeling (semantics), relation extraction (information content), aspect-based sentiment analysis (sentiment), and many others, achieving performance comparable to state-of-the-art specialized models. We further demonstrate the benefits of multi-task learning, and also show that the proposed method makes it easy to analyze differences and similarities in how the model handles different tasks. Finally, we convert these datasets into a unified format to build a benchmark, which provides a holistic testbed for evaluating future models for generalized natural language analysis.
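
To make the unified format concrete, the sketch below shows how three of the evaluated tasks (named entity recognition, dependency parsing, and relation extraction) can all be expressed as labeled spans plus labeled relations between spans. This is a minimal illustrative sketch, not the paper's released code: the Span/Relation dataclasses, field names, the example sentence, and the "works_at" relation label are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Span:
    start: int                    # index of the first token in the span (inclusive)
    end: int                      # index of the last token in the span (inclusive)
    label: Optional[str] = None   # e.g. an NER type, POS tag, or constituent label

@dataclass(frozen=True)
class Relation:
    head: Span    # the "parent" span
    child: Span   # the "dependent" span
    label: str    # e.g. a dependency relation, semantic role, or relation type

# Example sentence (hypothetical): "Graham Neubig works at CMU ."
# Tokens:                            0      1      2     3  4   5

# Named entity recognition: labeled spans, no relations.
ner_spans = [Span(0, 1, "PER"), Span(4, 4, "ORG")]
ner_relations: List[Relation] = []

# Dependency parsing: every single-token span is linked to its head token
# (a partial, illustrative arc set in Stanford-dependency style).
dep_spans = [Span(i, i) for i in range(6)]
dep_relations = [
    Relation(head=dep_spans[2], child=dep_spans[1], label="nsubj"),     # works <- Neubig
    Relation(head=dep_spans[1], child=dep_spans[0], label="compound"),  # Neubig <- Graham
    Relation(head=dep_spans[2], child=dep_spans[3], label="prep"),      # works <- at
    Relation(head=dep_spans[3], child=dep_spans[4], label="pobj"),      # at <- CMU
]

# Relation extraction: entity spans plus a labeled relation between them.
# ("works_at" is a made-up label for illustration.)
re_spans = [Span(0, 1, "PER"), Span(4, 4, "ORG")]
re_relations = [Relation(head=re_spans[0], child=re_spans[1], label="works_at")]
```

Because every task reduces to the same (spans, relations) representation, one task-independent span-relation model can in principle be trained and evaluated on all of them.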


Results from the Paper


Task                                          | Dataset              | Model   | Metric   | Metric Value | Global Rank
Named Entity Recognition (NER)                | CoNLL 2003 (English) | SpanRel | F1       | 92.2         | #45
Semantic Role Labeling (predicted predicates) | CoNLL 2012           | SpanRel | F1       | 82.4         | #5
Part-Of-Speech Tagging                        | Penn Treebank        | SpanRel | Accuracy | 97.7         | #6
Constituency Parsing                          | Penn Treebank        | SpanRel | F1 score | 95.5         | #11
Dependency Parsing                            | Penn Treebank        | SpanRel | UAS      | 96.44        | #9
Dependency Parsing                            | Penn Treebank        | SpanRel | LAS      | 94.70        | #9
Relation Extraction                           | SemEval-2010 Task-8  | SpanRel | F1       | 87.4         | #23
Relation Extraction                           | WLPC                 | SpanRel | F1       | 65.5         | #1
Named Entity Recognition (NER)                | WLPC                 | SpanRel | F1       | 79.2         | #2
