Grammatical Error Detection

17 papers with code • 4 benchmarks • 4 datasets

Grammatical Error Detection (GED) is the task of detecting different kinds of errors in text, such as spelling, punctuation, grammatical, and word-choice errors. GED is a key component of grammatical error correction (GEC) systems.
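GED is commonly framed as binary sequence labeling: each token is tagged as correct or incorrect, and systems are typically scored with precision, recall, and F0.5 (precision-weighted, since wrongly flagging correct text is costly for learners). A minimal sketch, with an illustrative sentence and labels that are not drawn from any dataset:

```python
# Binary token labeling for GED: "c" = correct, "i" = incorrect.
# Example data is illustrative only.
tokens = ["He", "go", "to", "school", "yesterday", "."]
gold   = ["c",  "i",  "c",  "c",      "c",         "c"]  # "go" should be "went"
pred   = ["c",  "i",  "i",  "c",      "c",         "c"]  # system also flags "to"

# Count true positives, false positives, false negatives on the "i" class.
tp = sum(g == "i" and p == "i" for g, p in zip(gold, pred))
fp = sum(g == "c" and p == "i" for g, p in zip(gold, pred))
fn = sum(g == "i" and p == "c" for g, p in zip(gold, pred))

precision = tp / (tp + fp) if tp + fp else 0.0
recall    = tp / (tp + fn) if tp + fn else 0.0
beta = 0.5  # F0.5 weights precision twice as much as recall
f05 = ((1 + beta**2) * precision * recall /
       (beta**2 * precision + recall)) if precision + recall else 0.0

print(precision, recall, round(f05, 3))  # 0.5 1.0 0.556
```

Here the system finds the real error but also raises one false alarm, so recall is perfect while precision, and therefore F0.5, drops.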

Most implemented papers

Semi-supervised Multitask Learning for Sequence Labeling

marekrei/sequence-labeler ACL 2017

We propose a sequence labeling framework with a secondary training objective, learning to predict surrounding words for every word in the dataset.

Jointly Learning to Label Sentences and Tokens

marekrei/mltagger 14 Nov 2018

Learning to construct text representations in end-to-end systems can be difficult, as natural languages are highly compositional and task-specific annotated datasets are often limited in size.

FCGEC: Fine-Grained Corpus for Chinese Grammatical Error Correction

xlxwalex/FCGEC 22 Oct 2022

Grammatical Error Correction (GEC) has recently been broadly applied in automatic correction and proofreading systems.

Bangla Grammatical Error Detection Using T5 Transformer Model

ramisa2108/bangla-complex-named-entity-recognition-challenge 19 Mar 2023

This paper presents a method for detecting grammatical errors in Bangla using a Text-to-Text Transfer Transformer (T5) language model: the small variant of BanglaT5, fine-tuned on a corpus of 9,385 sentences in which errors were bracketed by a dedicated demarcation symbol.
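The text-to-text formulation means the model's target output is the sentence itself with erroneous spans wrapped in a demarcation symbol. A hedged sketch of that target format; the "$" symbol, the helper name, and the English example are illustrative assumptions, not the paper's actual corpus or symbol:

```python
# Hypothetical helper: wrap tokens inside the given error spans
# (half-open [start, end) token indices) with a demarcation symbol.
def mark_errors(tokens, error_spans, sym="$"):
    out = []
    for i, tok in enumerate(tokens):
        if any(start <= i < end for start, end in error_spans):
            out.append(f"{sym}{tok}{sym}")  # flag this token as erroneous
        else:
            out.append(tok)
    return " ".join(out)

# Source sentence and its target with the error span demarcated.
print(mark_errors(["He", "go", "to", "school"], [(1, 2)]))
# He $go$ to school
```

A seq2seq model fine-tuned on such pairs learns to copy the input while inserting the markers, turning detection into a generation task.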

Grammatical Error Detection Using Error- and Grammaticality-Specific Word Embeddings

kanekomasahiro/grammatical-error-detection IJCNLP 2017

In this study, we improve grammatical error detection by learning word embeddings that consider grammaticality and error patterns.

Wronging a Right: Generating Better Errors to Improve Grammatical Error Detection

skasewa/wronging EMNLP 2018

Grammatical error correction, like other machine learning tasks, greatly benefits from large quantities of high-quality training data, which is typically expensive to produce.

Sequence Classification with Human Attention

coastalcph/Sequence_classification_with_human_attention CoNLL 2018

Learning attention functions requires large volumes of data, but many NLP tasks simulate human behavior; in this paper, we show that human attention does indeed provide a good inductive bias for many attention functions in NLP.

Detecting Local Insights from Global Labels: Supervised & Zero-Shot Sequence Labeling via a Convolutional Decomposition

allenschmaltz/exa 4 Jun 2019

From this sequence-labeling layer we derive dense representations of the input that can then be matched to instances from the training set, or a support set with known labels.

Context is Key: Grammatical Error Detection with Contextual Word Representations

samueljamesbell/sequence-labeler WS 2019

Grammatical error detection (GED) in non-native writing requires systems to identify a wide range of errors in text written by language learners.