Multitask Parsing Across Semantic Representations

ACL 2018 · Daniel Hershcovich, Omri Abend, Ari Rappoport

The ability to consolidate information of different types is at the core of intelligence, and has tremendous practical value in allowing learning for one task to benefit from generalizations learned for others. In this paper we tackle the challenging task of improving semantic parsing performance, taking UCCA parsing as a test case, and AMR, SDP and Universal Dependencies (UD) parsing as auxiliary tasks. We experiment on three languages, using a uniform transition-based system and learning architecture for all parsing tasks. Despite notable conceptual, formal and domain differences, we show that multitask learning significantly improves UCCA parsing in both in-domain and out-of-domain settings.
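As a rough illustration of the multitask setup described above, the sketch below shows hard parameter sharing: a single shared encoder feeding one task-specific transition classifier per task, with UCCA as the main task and AMR, SDP, and UD as auxiliaries. This is a minimal PyTorch sketch under stated assumptions, not the paper's actual model (the authors' TUPA parser is a transition-based system with richer stack/buffer features); all names here (MultitaskParser, TASKS, the dimensions, the dummy data) are hypothetical.

```python
# Hypothetical sketch of hard-parameter-sharing multitask learning:
# one shared encoder, one transition classifier ("head") per task.
import torch
import torch.nn as nn

TASKS = ["ucca", "amr", "sdp", "ud"]  # main task + auxiliary tasks

class MultitaskParser(nn.Module):
    def __init__(self, vocab_size, n_transitions, emb_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared BiLSTM encoder: generalizations learned for the auxiliary
        # tasks can benefit the main task through these shared weights.
        self.encoder = nn.LSTM(emb_dim, hidden_dim,
                               bidirectional=True, batch_first=True)
        # Task-specific heads: each task keeps its own transition scorer.
        self.heads = nn.ModuleDict({
            task: nn.Linear(2 * hidden_dim, n_transitions[task])
            for task in TASKS
        })

    def forward(self, token_ids, task):
        states, _ = self.encoder(self.embed(token_ids))
        return self.heads[task](states)  # per-token transition scores

# Training alternates among tasks; only the sampled task's head receives
# gradients, while the encoder is updated by every task.
model = MultitaskParser(vocab_size=10_000,
                        n_transitions={t: 30 for t in TASKS})
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters())

batch = torch.randint(0, 10_000, (8, 20))  # dummy token ids
gold = torch.randint(0, 30, (8, 20))       # dummy gold transitions
for task in TASKS:
    scores = model(batch, task)
    loss = loss_fn(scores.view(-1, 30), gold.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```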


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| UCCA Parsing | SemEval 2019 Task 1, English-Wiki (open) | Transition-based + MTL | F1 | 73.5 | #3 |
| UCCA Parsing | SemEval 2019 Task 1, English-20K (open) | Transition-based + MTL | F1 | 68.4 | #2 |
