Grammatical Error Correction (GEC) is the task of correcting errors in text, including spelling, punctuation, grammatical, and word-choice errors.
GEC is typically formulated as a sentence correction task: a GEC system takes a potentially erroneous sentence as input and is expected to transform it into its corrected version, as in the example below:
| Input (Erroneous) | Output (Corrected) |
| --- | --- |
| She see Tom is catched by policeman in park at last night. | She saw Tom caught by a policeman in the park last night. |
(Image credit: Ge et al.)
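A minimal sketch of this sentence-level formulation, using the Hugging Face `transformers` seq2seq API; the checkpoint name below is a placeholder for any sequence-to-sequence model fine-tuned for GEC, not a specific published system.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder checkpoint name: substitute any seq2seq model fine-tuned for GEC.
model_name = "path/to/gec-seq2seq-checkpoint"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

source = "She see Tom is catched by policeman in park at last night."
inputs = tokenizer(source, return_tensors="pt")
# Beam-search decoding of the corrected sentence.
output_ids = model.generate(**inputs, num_beams=4, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```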
The lack of large-scale datasets has been a major hindrance to progress on NLP tasks such as spelling correction and grammatical error correction (GEC).
Until now, error type performance for Grammatical Error Correction (GEC) systems could only be measured in terms of recall because system output is not annotated.
In this paper, we present a simple and efficient GEC sequence tagger using a Transformer encoder.
Ranked #1 on Grammatical Error Correction on BEA-2019 (test)
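To illustrate the sequence-tagging formulation, the sketch below applies per-token edit tags to a source sentence; the tag names (KEEP, DELETE, REPLACE_x, APPEND_x) are illustrative and not necessarily the exact inventory predicted by any particular tagger.

```python
def apply_edit_tags(tokens, tags):
    """Apply one edit tag per source token and return the corrected token sequence."""
    out = []
    for token, tag in zip(tokens, tags):
        if tag == "KEEP":
            out.append(token)
        elif tag == "DELETE":
            continue
        elif tag.startswith("REPLACE_"):
            out.append(tag[len("REPLACE_"):])
        elif tag.startswith("APPEND_"):
            out.append(token)
            out.append(tag[len("APPEND_"):])
    return out

# In a real system, a Transformer encoder predicts one tag per token.
tokens = ["She", "see", "Tom", "is", "catched", "by", "policeman"]
tags = ["KEEP", "REPLACE_saw", "KEEP", "DELETE", "REPLACE_caught", "APPEND_a", "KEEP"]
print(" ".join(apply_edit_tags(tokens, tags)))  # She saw Tom caught by a policeman
```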
This is the first time that copying words from the source context and fully pre-training a sequence-to-sequence model have been explored for the GEC task.
Ranked #1 on Grammatical Error Correction on JFLEG
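A toy sketch of a generic pointer-style copy mechanism, showing how a decoder's vocabulary distribution can be mixed with a copy distribution scattered from attention over source tokens; the function and parameter names are illustrative, not the paper's implementation.

```python
import numpy as np

def copy_augmented_distribution(p_vocab, attention, src_token_ids, p_gen):
    """Mix the generation distribution with a copy distribution over source tokens.

    p_vocab: vocabulary distribution from the decoder, shape (vocab_size,)
    attention: attention weights over source positions, shape (src_len,)
    src_token_ids: vocabulary id of each source token, shape (src_len,)
    p_gen: scalar in [0, 1], probability of generating rather than copying
    """
    p_copy = np.zeros_like(p_vocab)
    for weight, token_id in zip(attention, src_token_ids):
        p_copy[token_id] += weight  # scatter attention mass onto source token ids
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy
```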
We improve automatic correction of grammatical, orthographic, and collocation errors in text using a multilayer convolutional encoder-decoder neural network.
Ranked #3 on Grammatical Error Correction on _Restricted_
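A minimal PyTorch sketch of a stacked 1-D convolutional encoder over token embeddings, intended only to illustrate the architectural idea; the layer count, dimensions, and residual connections here are assumptions, not the cited system's configuration.

```python
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Token embeddings followed by stacked 1-D convolutions with residual connections."""
    def __init__(self, vocab_size, emb_dim=256, num_layers=3, kernel_size=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList([
            nn.Conv1d(emb_dim, emb_dim, kernel_size, padding=kernel_size // 2)
            for _ in range(num_layers)
        ])

    def forward(self, token_ids):                   # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)   # (batch, emb_dim, seq_len)
        for conv in self.convs:
            x = torch.relu(conv(x)) + x             # residual connection per layer
        return x.transpose(1, 2)                    # (batch, seq_len, emb_dim)

encoder = ConvEncoder(vocab_size=10000)
hidden = encoder(torch.randint(0, 10000, (2, 12)))  # encodes a batch of 2 sentences
```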
Previously, neural methods in grammatical error correction (GEC) did not reach state-of-the-art results compared to phrase-based statistical machine translation (SMT) baselines.
Ranked #2 on Grammatical Error Correction on _Restricted_
We present a Parallel Iterative Edit (PIE) model for the problem of local sequence transduction arising in tasks like Grammatical error correction (GEC).
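The iterative part of such edit-based correction can be sketched as repeated application of a single-pass corrector until the output stops changing; `correct_once` below is a hypothetical stand-in for such a model, not the PIE implementation itself.

```python
def iterative_correct(sentence, correct_once, max_rounds=5):
    """Apply a single-pass corrector repeatedly until no further edits are proposed."""
    for _ in range(max_rounds):
        corrected = correct_once(sentence)
        if corrected == sentence:  # converged: no further edits proposed
            return corrected
        sentence = corrected
    return sentence
```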
We present a new parallel corpus, the JHU FLuency-Extended GUG corpus (JFLEG), for developing and evaluating grammatical error correction (GEC).
The resulting parallel corpora are subsequently used to pre-train Transformer models.
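As a rough illustration of how such parallel corpora can be synthesized from clean text, the sketch below corrupts clean sentences with simple word-level noise to produce (noisy source, clean target) pairs; the noise operations and rates are assumptions, and real pipelines typically use richer, error-distribution-aware corruption.

```python
import random

def corrupt(sentence, drop_prob=0.1, swap_prob=0.1, seed=None):
    """Inject simple word-level noise into a clean sentence."""
    rng = random.Random(seed)
    noisy = []
    for token in sentence.split():
        r = rng.random()
        if r < drop_prob:
            continue                    # drop the token
        elif r < drop_prob + swap_prob and noisy:
            noisy.insert(-1, token)     # move the token before its left neighbour
        else:
            noisy.append(token)
    return " ".join(noisy)

clean = "She saw Tom caught by a policeman in the park last night."
pair = (corrupt(clean, seed=0), clean)  # (noisy source, clean target) pre-training pair
```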