Grammatical Error Correction
117 papers with code • 11 benchmarks • 15 datasets
Grammatical Error Correction (GEC) is the task of correcting errors in text, including spelling, punctuation, grammatical, and word-choice errors.
GEC is typically formulated as a sentence correction task: a GEC system takes a potentially erroneous sentence as input and is expected to transform it into its corrected version, as in the example below:
| Input (Erroneous) | Output (Corrected) |
|---|---|
| She see Tom is catched by policeman in park at last night. | She saw Tom caught by a policeman in the park last night. |
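At its interface, a GEC system is simply a function from a possibly erroneous sentence to a corrected one. The toy sketch below illustrates only that interface with a hand-written substitution table for the example sentence above; it is not a real GEC model (production systems learn corrections with sequence-to-sequence or edit-tagging neural models), and every rule in it is illustrative.

```python
# Toy illustration of the GEC interface: sentence in, corrected sentence out.
# The (wrong, right) rules below are hand-written for the example sentence
# only; real systems learn such corrections from annotated learner corpora.
TOY_RULES = [
    ("She see", "She saw"),
    ("is catched by policeman", "caught by a policeman"),
    ("in park", "in the park"),
    ("at last night", "last night"),
]

def correct(sentence: str) -> str:
    """Apply each substitution rule left to right and return the result."""
    for wrong, right in TOY_RULES:
        sentence = sentence.replace(wrong, right)
    return sentence

print(correct("She see Tom is catched by policeman in park at last night."))
# → She saw Tom caught by a policeman in the park last night.
```

A neural GEC model replaces the rule table with a learned transformation, but exposes the same sentence-to-sentence contract.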
Libraries
Use these libraries to find Grammatical Error Correction models and implementations.
Latest papers with no code
Large Language Models Are State-of-the-Art Evaluator for Grammatical Error Correction
Large Language Models (LLMs) have been reported to outperform existing automatic evaluation metrics in some tasks, such as text summarization and machine translation.
LM-Combiner: A Contextual Rewriting Model for Chinese Grammatical Error Correction
In this light, we propose the LM-Combiner, a rewriting model that can directly modify the over-correction of GEC system outputs without a model ensemble.
To Err Is Human, but Llamas Can Learn It Too
This study explores enhancing grammatical error correction (GEC) through artificial error generation (AEG) using language models (LMs).
Revisiting Meta-evaluation for Grammatical Error Correction
The improved correlations obtained by aligning granularity in sentence-level meta-evaluation suggest that edit-based metrics may have been underestimated in existing studies.
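The edit-based metrics referred to here (such as the M² and ERRANT scorers) commonly score a system by matching its edits against reference edits and combining precision and recall into F0.5, which weights precision twice as heavily as recall. The sketch below shows that core computation only, under a simplified edit representation (plain tuples); real scorers additionally align tokens and handle alternative reference annotations.

```python
# Minimal sketch of edit-based GEC scoring with F-beta (beta = 0.5, i.e.
# precision-weighted), in the spirit of the M^2 and ERRANT scorers.
# An "edit" is simplified here to a (start, end, replacement) tuple.
def f_beta(hyp_edits: set, ref_edits: set, beta: float = 0.5) -> float:
    tp = len(hyp_edits & ref_edits)                  # edits the system got right
    p = tp / len(hyp_edits) if hyp_edits else 1.0    # precision
    r = tp / len(ref_edits) if ref_edits else 1.0    # recall
    if p + r == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * p * r / (b2 * p + r)

# Hypothetical system vs. reference edits: 3 correct edits, 1 missed.
hyp = {(1, 2, "saw"), (4, 5, "caught"), (8, 8, "the")}
ref = {(1, 2, "saw"), (4, 5, "caught"), (8, 8, "the"), (11, 12, "")}
print(round(f_beta(hyp, ref), 3))  # precision 1.0, recall 0.75 → 0.938
```

Because beta = 0.5, a system that proposes few but accurate edits scores higher than one that finds more errors at the cost of spurious corrections, which matches the conservative bias expected of GEC systems.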
Neural Automated Writing Evaluation with Corrective Feedback
The utilization of technology in second language learning and teaching has become ubiquitous.
mEdIT: Multilingual Text Editing via Instruction Tuning
We introduce mEdIT, a multilingual extension of CoEdIT -- the recent state-of-the-art text editing model for writing assistance.
Likelihood-based Mitigation of Evaluation Bias in Large Language Models
In this paper, we investigate the presence and impact of likelihood bias in LLM-based evaluators.
Evaluating Prompting Strategies for Grammatical Error Correction Based on Language Proficiency
The writing examples of English language learners may be different from those of native speakers.
Rethinking the Roles of Large Language Models in Chinese Grammatical Error Correction
To help the Chinese GEC (CGEC) field better adapt to the era of LLMs, we rethink the roles of LLMs in the CGEC task so that they can be better utilized and explored.
Alirector: Alignment-Enhanced Chinese Grammatical Error Corrector
Then, we combine the source sentence with the initial correction and feed the pair to an alignment model for another round of correction, encouraging the alignment model to focus on potential overcorrection.