Stronger Baselines for Grammatical Error Correction Using Pretrained Encoder-Decoder Model

Studies on grammatical error correction (GEC) have reported the effectiveness of pretraining a Seq2Seq model with a large amount of pseudodata. However, this approach requires time-consuming pretraining for GEC because of the size of the pseudodata...
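To make the pseudodata approach concrete, the sketch below shows one common way such data is built: injecting synthetic errors (deletions, duplications, swaps) into clean sentences to produce (noisy, clean) training pairs for a Seq2Seq corrector. This is a minimal illustration only; real GEC pipelines use far richer error models, and the function name and error rates here are assumptions, not the paper's method.

```python
import random

def make_pseudodata(clean_sentences, seed=0):
    """Create (noisy, clean) pairs by injecting synthetic errors.

    A toy stand-in for the large-scale pseudodata used to pretrain
    Seq2Seq GEC models; the error types and rates are illustrative.
    """
    rng = random.Random(seed)
    pairs = []
    for sent in clean_sentences:
        noisy = []
        for tok in sent.split():
            r = rng.random()
            if r < 0.1:
                continue                # deletion error: drop the token
            noisy.append(tok)
            if r < 0.2:
                noisy.append(tok)       # duplication error: repeat the token
        if len(noisy) > 1 and rng.random() < 0.3:
            i = rng.randrange(len(noisy) - 1)
            noisy[i], noisy[i + 1] = noisy[i + 1], noisy[i]  # swap adjacent tokens
        pairs.append((" ".join(noisy), sent))
    return pairs

pairs = make_pseudodata(["She goes to school every day .",
                         "He has finished his homework ."])
for noisy, clean in pairs:
    print(noisy, "->", clean)
```

Each pair can then feed a standard encoder-decoder trainer, with the noisy side as source and the clean side as target; the paper's point is that a publicly available pretrained encoder-decoder can skip much of this costly pretraining stage.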
