Approaching Neural Grammatical Error Correction as a Low-Resource Machine Translation Task

Previously, neural methods in grammatical error correction (GEC) did not reach state-of-the-art results compared to phrase-based statistical machine translation (SMT) baselines. We demonstrate parallels between neural GEC and low-resource neural MT and successfully adapt several methods from low-resource MT to neural GEC. We further establish guidelines for trustworthy results in neural GEC and propose a set of model-independent methods that can be easily applied in most GEC settings. The proposed methods include adding source-side noise, domain-adaptation techniques, a GEC-specific training objective, transfer learning with monolingual data, and ensembling of independently trained GEC models and language models. Combined, these methods yield neural GEC models that surpass the state of the art, outperforming the previously best neural GEC systems by more than 10% M² on the CoNLL-2014 benchmark and by 5.9% on the JFLEG test set. Non-neural state-of-the-art systems are outperformed by more than 2% on CoNLL-2014 and by 4% on JFLEG.
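
Of the proposed methods, adding source-side noise is the simplest to illustrate in isolation. The sketch below shows one plausible form of it, applying random token drops and adjacent swaps to the source side of a (source, target) training pair; the function name, probabilities, and choice of noise types are assumptions for illustration, not the paper's exact noising scheme.

```python
import random

def add_source_noise(tokens, drop_prob=0.1, swap_prob=0.1, rng=None):
    """Return a noised copy of a source token list.

    Illustrative noising only (random drops + adjacent swaps); the
    paper's exact noise distribution may differ.
    """
    rng = rng or random.Random()
    # Randomly drop tokens; keep at least one so the source stays non-empty.
    noised = [t for t in tokens if rng.random() >= drop_prob] or tokens[:1]
    # Randomly swap adjacent tokens, skipping past each swapped pair.
    i = 0
    while i < len(noised) - 1:
        if rng.random() < swap_prob:
            noised[i], noised[i + 1] = noised[i + 1], noised[i]
            i += 2
        else:
            i += 1
    return noised

# Usage: noise only the source side; the target (corrected) side is untouched.
src = "She go to school every day".split()
print(" ".join(add_source_noise(src, rng=random.Random(42))))
```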

Published at NAACL 2018 (PDF and abstract available).
| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Grammatical Error Correction | CoNLL-2014 Shared Task | Transformer | F0.5 | 55.8 | #18 |
| Grammatical Error Correction | JFLEG | Transformer | GLEU | 59.9 | #5 |
| Grammatical Error Correction | Restricted | Transformer | F0.5 | 55.8 | #3 |
| Grammatical Error Correction | Restricted | Transformer | GLEU | 59.9 | #1 |
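
The CoNLL-2014 results above are F0.5 scores from the M² (MaxMatch) scorer, which matches system edits against gold-standard edits and weighs precision twice as heavily as recall. The snippet below is a minimal sketch of the final F-beta computation only; the scorer's edit extraction and alignment steps are omitted.

```python
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    """Standard F-beta score; beta < 1 favors precision, as GEC evaluation does."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# Example: a system with precision 0.60 and recall 0.40 scores F0.5 ~= 0.545.
print(f"{f_beta(0.60, 0.40):.4f}")
```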

Methods

Transformer (per the Model column in the results table above).