Grammatical Error Correction
121 papers with code • 11 benchmarks • 15 datasets
Grammatical Error Correction (GEC) is the task of correcting errors in text, such as spelling, punctuation, grammatical, and word-choice errors.
GEC is typically formulated as a sentence correction task. A GEC system takes a potentially erroneous sentence as input and is expected to transform it to its corrected version. See the example given below:
| Input (Erroneous) | Output (Corrected) |
|---|---|
| She see Tom is catched by policeman in park at last night. | She saw Tom caught by a policeman in the park last night. |
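To make the formulation concrete, the sketch below treats GEC as a sentence-to-sentence mapping. Real GEC systems learn this mapping (e.g., with neural sequence-to-sequence models); here the edit rules are a hand-written, toy illustration covering only the example sentence above.

```python
# Toy illustration of the GEC formulation: take a potentially erroneous
# sentence as input and return its corrected version. The edit table below
# is invented for this one example; it is not a real GEC system.
def correct(sentence: str) -> str:
    edits = [
        ("She see ", "She saw "),                          # verb tense
        ("is catched by policeman", "caught by a policeman"),  # participle + article
        ("in park", "in the park"),                        # missing article
        ("at last night", "last night"),                   # spurious preposition
    ]
    for wrong, right in edits:
        sentence = sentence.replace(wrong, right)
    return sentence

src = "She see Tom is catched by policeman in park at last night."
print(correct(src))
# → She saw Tom caught by a policeman in the park last night.
```

Each rule targets one error type from the example (tense, irregular past participle, missing articles, word choice), mirroring the error categories listed in the task description.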
Libraries
Use these libraries to find Grammatical Error Correction models and implementations.
Datasets
Latest papers with no code
Beyond English: Evaluating LLMs for Arabic Grammatical Error Correction
Our best model achieves a new SOTA on Arabic GEC, with 73.29 and 73.26 F$_{1}$ on the 2014 and 2015 QALB datasets, respectively, compared to peer-reviewed published baselines.
Grammatical Error Correction via Mixed-Grained Weighted Training
In this paper, the inherent discrepancies are manifested in two aspects, namely, accuracy of data annotation and diversity of potential annotations.
Efficient Grammatical Error Correction Via Multi-Task Training and Optimized Training Schedule
Progress in neural grammatical error correction (GEC) is hindered by the lack of annotated training data.
Towards End-to-End Spoken Grammatical Error Correction
This foundation model can be used to replace the whole framework or part of it, e.g., ASR and disfluency removal.
HTEC: Human Transcription Error Correction
Therefore, we propose HTEC for Human Transcription Error Correction.
ChatGPT for Arabic Grammatical Error Correction
Recently, large language models (LLMs) fine-tuned to follow human instruction have exhibited significant capabilities in various English NLP tasks.
On the (In)Effectiveness of Large Language Models for Chinese Text Correction
Recently, the development and progress of Large Language Models (LLMs) have amazed the entire Artificial Intelligence community.
On the application of Large Language Models for language teaching and assessment technology
The recent release of very large language models such as PaLM and GPT-4 has made an unprecedented impact in the popular media and public consciousness, giving rise to a mixture of excitement and fear as to their capabilities and potential uses, and shining a light on natural language processing research which had not previously received so much attention.
Evaluating the Capability of Large-scale Language Models on Chinese Grammatical Error Correction Task
Large-scale language models (LLMs) have shown remarkable capability in a variety of Natural Language Processing (NLP) tasks and have attracted a lot of attention recently.
Leveraging Denoised Abstract Meaning Representation for Grammatical Error Correction
Experiments on the BEA-2019 shared task and the CoNLL-2014 shared task have shown that AMR-GEC performs comparably to a set of strong baselines with a large number of synthetic data.