RANLP 2021 • Youki Itoh, Hiroyuki Shinnou
Herein, we propose a method for addressing the computational efficiency of pretraining models under domain shift: we construct an ELECTRA pretraining model on a Japanese dataset and then perform additional pretraining of this model for a downstream task using a corpus from the target domain.
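Since no official implementation is listed, the following is a minimal, illustrative sketch of what one ELECTRA-style additional-pretraining step on a target-domain corpus could look like, using the HuggingFace Transformers library. The checkpoint names (`google/electra-small-generator`, `google/electra-small-discriminator`), the masking probability, and the optimizer settings are stand-ins, not the paper's actual Japanese model or hyperparameters.

```python
import torch
from transformers import ElectraForMaskedLM, ElectraForPreTraining, ElectraTokenizerFast

# Placeholder English checkpoints; the paper pretrains a Japanese ELECTRA model.
tok = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")
gen = ElectraForMaskedLM.from_pretrained("google/electra-small-generator")
disc = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")
gen.train()
disc.train()

optimizer = torch.optim.AdamW(
    list(gen.parameters()) + list(disc.parameters()), lr=5e-5
)

def pretrain_step(texts, mask_prob=0.15, disc_weight=50.0):
    """One ELECTRA-style step on a batch of target-domain sentences."""
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True,
                return_special_tokens_mask=True)
    ids, attn = batch["input_ids"], batch["attention_mask"]
    special = batch["special_tokens_mask"].bool()

    # Mask a random subset of real (non-special, non-padding) tokens.
    mask = (torch.rand(ids.shape) < mask_prob) & attn.bool() & ~special

    gen_input = ids.clone()
    gen_input[mask] = tok.mask_token_id
    gen_labels = ids.masked_fill(~mask, -100)  # MLM loss on masked positions only
    gen_out = gen(input_ids=gen_input, attention_mask=attn, labels=gen_labels)

    # Sample replacement tokens from the generator's output distribution.
    with torch.no_grad():
        sampled = torch.multinomial(gen_out.logits[mask].softmax(-1), 1).squeeze(-1)

    disc_input = ids.clone()
    disc_input[mask] = sampled
    # Replaced-token-detection labels: 1 where the token was actually changed.
    disc_out = disc(input_ids=disc_input, attention_mask=attn,
                    labels=(disc_input != ids).long())

    # lambda = 50 follows the loss weighting in the original ELECTRA paper.
    loss = gen_out.loss + disc_weight * disc_out.loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# e.g. pretrain_step(["a sentence from the target-domain corpus", ...])
```

A faithful reproduction would also tie the generator and discriminator token embeddings, as in the original ELECTRA; this sketch omits that for brevity.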