KR-BERT: A Small-Scale Korean-Specific Language Model

10 Aug 2020 · Sangah Lee, Hansol Jang, Yunmee Baik, Suzi Park, Hyopil Shin

Since the appearance of BERT, recent works such as XLNet and RoBERTa have utilized sentence embedding models pre-trained on large corpora with large numbers of parameters. Because such models demand substantial hardware and huge amounts of data, they take a long time to pre-train...
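As a quick orientation, the sketch below shows how a pre-trained BERT-style encoder like KR-BERT can embed a Korean sentence through the Hugging Face transformers API. The hub identifier snunlp/KR-BERT-char16424 is an assumption about where the released character-level checkpoint lives, not something stated in this abstract; substitute whichever checkpoint you actually use.

    # Minimal sketch: embedding a Korean sentence with a BERT-style encoder.
    import torch
    from transformers import AutoModel, AutoTokenizer

    model_id = "snunlp/KR-BERT-char16424"  # assumed hub location of KR-BERT
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModel.from_pretrained(model_id)

    sentence = "한국어 전용 소형 언어 모델입니다."  # "A small Korean-specific language model."
    inputs = tokenizer(sentence, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)

    # Use the [CLS] token's final hidden state as a sentence embedding.
    cls_embedding = outputs.last_hidden_state[:, 0, :]
    print(cls_embedding.shape)  # e.g. torch.Size([1, 768])

The point of a small-scale, language-specific model is exactly that this kind of inference (and even fine-tuning) stays feasible on modest hardware, in contrast to the large models the abstract mentions.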
