no code implementations • Asia-Pacific Chapter of the Association for Computational Linguistics (AACL) 2020 • Satoru Katsumata, Mamoru Komachi
In this study, we explored the utility of bidirectional and auto-regressive transformers (BART) as a generic pretrained encoder-decoder model for GEC.
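As a rough illustration of that setup, the sketch below applies a BART checkpoint to GEC as plain sequence-to-sequence rewriting via the Hugging Face transformers API. The checkpoint name is a placeholder (a real system would first fine-tune on error-corrected sentence pairs), not the paper's released model.

```python
# Minimal sketch: GEC as seq2seq rewriting with BART.
# Assumes a checkpoint already fine-tuned on (erroneous, corrected) pairs;
# "facebook/bart-base" below is only a placeholder.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

source = "She go to school every days ."
inputs = tokenizer(source, return_tensors="pt")

# Beam-search decoding; the generated sequence is the corrected sentence.
output_ids = model.generate(**inputs, num_beams=5, max_length=64)
corrected = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(corrected)
```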
no code implementations • COLING 2020 • Ikumi Yamashita, Satoru Katsumata, Masahiro Kaneko, Aizhan Imankulova, Mamoru Komachi
Cross-lingual transfer learning from high-resource languages (the source models) is effective for training models of low-resource languages (the target models) for various tasks.
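A minimal sketch of one common transfer recipe, assuming PyTorch models: copy every compatible parameter from the trained source-language model into the target-language model and re-learn only the vocabulary-dependent embeddings. The `skip_prefixes` convention is illustrative, not the paper's exact procedure.

```python
# Sketch: initialize a low-resource target model from a high-resource
# source model, skipping vocabulary-dependent embedding weights.
import torch

def transfer_weights(source_model, target_model, skip_prefixes=("embed",)):
    source_state = source_model.state_dict()
    target_state = target_model.state_dict()
    for name, tensor in source_state.items():
        if any(name.startswith(p) for p in skip_prefixes):
            continue  # vocabularies differ, so embeddings are re-learned
        if name in target_state and target_state[name].shape == tensor.shape:
            target_state[name] = tensor.clone()
    target_model.load_state_dict(target_state)
    return target_model
```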
no code implementations • Asia-Pacific Chapter of the Association for Computational Linguistics (AACL) 2020 • Hongfei Wang, Michiki Kurosawa, Satoru Katsumata, Mamoru Komachi
In recent years, pre-trained models have been extensively studied, and several downstream tasks have benefited from their utilization.
no code implementations • ACL 2020 • Yujin Takahashi, Satoru Katsumata, Mamoru Komachi
To address the limitations of language and computational resources, we assume that introducing pseudo errors into sentences similar to those written by language learners is more efficient than incorporating random pseudo errors into monolingual data.
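The contrast can be sketched with a toy error-injection function. The confusion table below is invented for illustration; a real system would estimate learner-like substitutions from an error-annotated corpus rather than hard-code them.

```python
# Toy sketch of pseudo-error injection for GEC data augmentation.
# LEARNER_CONFUSIONS is an invented stand-in for substitution statistics
# estimated from real learner errors.
import random

LEARNER_CONFUSIONS = {"a": ["the", ""], "the": ["a", ""], "is": ["are"], "to": ["for", ""]}

def inject_learner_like_errors(tokens, prob=0.15):
    noisy = []
    for tok in tokens:
        if tok in LEARNER_CONFUSIONS and random.random() < prob:
            repl = random.choice(LEARNER_CONFUSIONS[tok])
            if repl:  # substitution typical of learner writing
                noisy.append(repl)
            # an empty replacement models a deletion error
        else:
            noisy.append(tok)
    return noisy

print(inject_learner_like_errors("she is going to the store".split()))
```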
2 code implementations • 24 May 2020 • Satoru Katsumata, Mamoru Komachi
In this study, we explore the utility of bidirectional and auto-regressive transformers (BART) as a generic pretrained encoder-decoder model for GEC.
Ranked #14 on Grammatical Error Correction on CoNLL-2014 Shared Task
no code implementations • LREC 2020 • Reo Hirao, Mio Arai, Hiroki Shimanaka, Satoru Katsumata, Mamoru Komachi
In this study, we created an automated essay scoring (AES) system for non-native learners of Japanese using an essay dataset annotated with a holistic score and multiple trait scores, including content, organization, and language scores.
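One plausible way to predict a holistic score alongside trait scores from a single encoder is a multi-headed regression model. The sketch below is an assumed architecture for illustration only, not the system described in the paper.

```python
# Illustrative sketch: one encoder with separate regression heads for the
# holistic score and each trait score (content, organization, language).
import torch
import torch.nn as nn

class MultiTraitScorer(nn.Module):
    def __init__(self, hidden_size=256,
                 traits=("holistic", "content", "organization", "language")):
        super().__init__()
        self.encoder = nn.LSTM(input_size=300, hidden_size=hidden_size,
                               batch_first=True)
        self.heads = nn.ModuleDict({t: nn.Linear(hidden_size, 1) for t in traits})

    def forward(self, embedded_essay):
        _, (h, _) = self.encoder(embedded_essay)  # final hidden state
        return {t: head(h[-1]).squeeze(-1) for t, head in self.heads.items()}

# Batch of 2 essays, 50 tokens each, 300-dim word embeddings.
scores = MultiTraitScorer()(torch.randn(2, 50, 300))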
no code implementations • WS 2019 • Masahiro Kaneko, Kengo Hotate, Satoru Katsumata, Mamoru Komachi
Thus, it is not straightforward to utilize language representations trained on a large corpus, such as Bidirectional Encoder Representations from Transformers (BERT), in a form suitable for the learner's grammatical errors.
no code implementations • WS 2019 • Satoru Katsumata, Mamoru Komachi
We introduce unsupervised techniques based on phrase-based statistical machine translation for grammatical error correction (GEC), trained on a pseudo learner corpus created with Google Translate.
no code implementations • 23 Jul 2019 • Satoru Katsumata, Mamoru Komachi
We introduce unsupervised techniques based on phrase-based statistical machine translation for grammatical error correction (GEC), trained on a pseudo learner corpus created with Google Translate.
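A pseudo learner corpus of this kind is often built by round-trip machine translation, pairing the noisy round-trip output with the original clean sentence. The sketch below assumes that construction; whether it matches the paper's exact procedure is an assumption, and `translate` is a hypothetical stand-in for an MT service such as Google Translate.

```python
# Sketch: clean English -> pivot language -> English round-trip translation
# tends to reintroduce learner-like errors, yielding (noisy, clean) pairs.
def translate(text, src, tgt):
    raise NotImplementedError("call an MT service here")  # hypothetical hook

def build_pseudo_pairs(clean_sentences, pivot="ja"):
    pairs = []
    for sent in clean_sentences:
        pivoted = translate(sent, src="en", tgt=pivot)
        noisy = translate(pivoted, src=pivot, tgt="en")
        pairs.append((noisy, sent))  # (pseudo learner sentence, reference)
    return pairs
```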
no code implementations • ACL 2019 • Kengo Hotate, Masahiro Kaneko, Satoru Katsumata, Mamoru Komachi
In this paper, we propose a method for neural grammatical error correction (GEC) that can control the degree of correction.
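Degree control in seq2seq GEC is commonly implemented by prepending a control token to the source sentence so the model conditions its output on the desired correction strength. The binning scheme below is illustrative, not the paper's exact tags.

```python
# Sketch: conditioning a seq2seq GEC model on correction strength via a
# control token prepended to the source. The tag names are illustrative.
def add_control_token(source, edit_rate):
    # Bin the desired degree of correction into a discrete tag.
    if edit_rate < 0.1:
        tag = "<minimal>"
    elif edit_rate < 0.3:
        tag = "<moderate>"
    else:
        tag = "<aggressive>"
    return f"{tag} {source}"

print(add_control_token("She go to school every days .", edit_rate=0.25))
```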
1 code implementation • ACL 2018 • Satoru Katsumata, Yukio Matsumura, Hayahide Yamagishi, Mamoru Komachi
For Japanese-to-English translation, this method achieves a BLEU score 0.56 points higher than that of the baseline.
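For reference, a BLEU-point difference of this kind can be measured with sacreBLEU as sketched below; the hypothesis and reference strings are placeholders, not the paper's data.

```python
# Sketch: measuring a BLEU gain over a baseline with sacreBLEU.
import sacrebleu

refs = [["the cat sat on the mat ."]]        # one reference stream
baseline_out = ["a cat sat on mat ."]
proposed_out = ["the cat sat on a mat ."]

baseline_bleu = sacrebleu.corpus_bleu(baseline_out, refs).score
proposed_bleu = sacrebleu.corpus_bleu(proposed_out, refs).score
print(f"gain: {proposed_bleu - baseline_bleu:+.2f} BLEU")
```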