We propose a sequence labeling framework with a secondary training objective, learning to predict surrounding words for every word in the dataset.
Ranked #3 on Grammatical Error Detection on FCE
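The secondary objective described above trains the model to predict the neighboring words at every position alongside the main tagging task. A minimal sketch of such a combined loss is shown below, assuming a shared hidden representation per token; the weights `W_tag`, `W_next`, `W_prev`, the mixing coefficient `gamma`, and all function names are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, targets):
    # Mean negative log-likelihood of the target class indices.
    return -np.mean(np.log(probs[np.arange(len(targets)), targets] + 1e-12))

def multitask_loss(hidden, W_tag, W_next, W_prev,
                   tags, next_words, prev_words, gamma=0.1):
    """Primary sequence-labeling loss plus an auxiliary language-modeling
    objective: at each token, also predict the next and previous word.
    gamma controls the weight of the auxiliary objective (assumed value)."""
    tag_loss = cross_entropy(softmax(hidden @ W_tag), tags)
    lm_loss = (cross_entropy(softmax(hidden @ W_next), next_words)
               + cross_entropy(softmax(hidden @ W_prev), prev_words))
    return tag_loss + gamma * lm_loss
```

Because the auxiliary targets come for free from the raw text itself, no extra annotation is needed; the language-modeling signal simply regularizes the shared token representations used by the error-detection head.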
Learning to construct text representations in end-to-end systems can be difficult, as natural languages are highly compositional and task-specific annotated datasets are often limited in size.
Ranked #1 on Grammatical Error Detection on FCE
In this study, we improve grammatical error detection by learning word embeddings that consider grammaticality and error patterns.
Grammatical error correction, like other machine learning tasks, benefits greatly from large quantities of high-quality training data, which is typically expensive to produce.