Universal Dependency Parsing from Scratch

This paper describes Stanford's system at the CoNLL 2018 UD Shared Task. We introduce a complete neural pipeline system that takes raw text as input and performs all tasks required by the shared task, ranging from tokenization and sentence segmentation to POS tagging and dependency parsing. Our single-system submission achieved very competitive performance on big treebanks. Moreover, after fixing an unfortunate bug, our corrected system would have placed 2nd, 1st, and 3rd on the official evaluation metrics LAS, MLAS, and BLEX, and would have outperformed all submitted systems on the low-resource treebank categories on all metrics by a large margin. We further show the effectiveness of different model components through extensive ablation studies.
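The staged pipeline the abstract describes (raw text → tokenization → sentence segmentation → POS tagging → dependency parsing) can be sketched as chained functions. This is a toy illustration of the data flow only: the function names and heuristics below are placeholders, not the paper's neural models, which predict each stage's output with trained networks.

```python
import re

def tokenize(text):
    # Toy regex tokenizer; the actual system uses a neural tokenizer.
    return re.findall(r"\w+|[^\w\s]", text)

def segment(tokens):
    # Toy sentence segmenter: split after sentence-final punctuation.
    sents, cur = [], []
    for tok in tokens:
        cur.append(tok)
        if tok in {".", "!", "?"}:
            sents.append(cur)
            cur = []
    if cur:
        sents.append(cur)
    return sents

def pos_tag(sentence):
    # Toy tagger; the actual system predicts UPOS/XPOS/features neurally.
    return [(tok, "PUNCT" if not tok[0].isalnum() else "X")
            for tok in sentence]

def parse(tagged):
    # Toy "parser": token 1 is the root (head 0), everything else
    # attaches to token 1. A real parser predicts heads and relations.
    return [(i + 1, tok, tag, 0 if i == 0 else 1)
            for i, (tok, tag) in enumerate(tagged)]

def pipeline(text):
    # Raw text in, one list of (index, token, tag, head) per sentence out.
    return [parse(pos_tag(s)) for s in segment(tokenize(text))]
```

Running `pipeline("Parsers work. Really!")` yields two parsed sentences, mirroring how the real system's output of one stage feeds the next.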

CoNLL 2018: PDF | Abstract
Task: Dependency Parsing    Dataset: Universal Dependencies    Model: Stanford+

Metric   Value   Global Rank
LAS      74.16   #4
UAS      62.08   #3
BLEX     65.28   #2
