CERT: Contrastive Self-supervised Learning for Language Understanding

16 May 2020 · Hongchao Fang, Sicheng Wang, Meng Zhou, Jiayuan Ding, Pengtao Xie

Pretrained language models such as BERT and GPT have shown great effectiveness in language understanding. However, the auxiliary predictive tasks in existing pretraining approaches are mostly defined on tokens, and thus may not capture sentence-level semantics very well...
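
As the title indicates, CERT applies contrastive self-supervised learning at the sentence level. As a rough illustration only (not the paper's exact implementation), a sentence-level contrastive objective can be sketched as an InfoNCE-style loss that pulls embeddings of two augmented views of the same sentence together and pushes apart embeddings of different sentences. The function name, temperature value, and the random embeddings standing in for encoder outputs below are illustrative assumptions.

```python
# Minimal sketch of a sentence-level InfoNCE-style contrastive loss.
# Not the authors' implementation; shapes and values are illustrative.
import torch
import torch.nn.functional as F

def contrastive_loss(view1, view2, temperature=0.07):
    """view1, view2: (batch, dim) embeddings of two augmented views of the same sentences."""
    z1 = F.normalize(view1, dim=1)
    z2 = F.normalize(view2, dim=1)
    logits = z1 @ z2.t() / temperature        # pairwise cosine similarities, scaled
    labels = torch.arange(z1.size(0))         # matching views lie on the diagonal
    return F.cross_entropy(logits, labels)

# Hypothetical usage with random tensors standing in for sentence-encoder outputs.
emb_a = torch.randn(8, 768)
emb_b = emb_a + 0.1 * torch.randn(8, 768)     # stand-in for an augmented view
print(contrastive_loss(emb_a, emb_b))
```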
