To Pretrain or Not to Pretrain: Examining the Benefits of Pretraining on Resource Rich Tasks

15 Jun 2020 · Sinong Wang, Madian Khabsa, Hao Ma

Pretraining NLP models with variants of the Masked Language Model (MLM) objective has recently led to significant improvements on many tasks. This paper examines the benefits of pretrained models as a function of the number of training samples used in the downstream task...
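The comparison the paper describes, a model fine-tuned from an MLM-pretrained checkpoint versus the same architecture trained from scratch, evaluated at increasing amounts of task data, can be illustrated with a minimal sketch. This is not the authors' code; the checkpoint name and subsample sizes below are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's exact code): compare an
# MLM-pretrained encoder with a randomly initialized one of the same
# architecture, each fine-tuned on task subsamples of increasing size.
from transformers import (AutoConfig, AutoTokenizer,
                          AutoModelForSequenceClassification)

model_name = "roberta-base"  # assumed checkpoint; the paper studies MLM-style pretraining
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Model initialized from the pretrained MLM checkpoint.
pretrained = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Same architecture with random initialization (trained from scratch).
config = AutoConfig.from_pretrained(model_name, num_labels=2)
scratch = AutoModelForSequenceClassification.from_config(config)

# Both models would then be fine-tuned and evaluated on subsamples of the
# downstream task at several sizes, tracing out the learning curves the
# paper compares.
for n in (1_000, 10_000, 100_000, 1_000_000):  # hypothetical subsample sizes
    pass  # fine-tune and evaluate each model on an n-example subsample
```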
