ERNIE 2.0: A Continual Pre-training Framework for Language Understanding

29 Jul 2019 · Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, Haifeng Wang

Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing. Current pre-training procedures usually train the model with several simple tasks that capture the co-occurrence of words or sentences. However, beyond co-occurrence, training corpora contain other valuable lexical, syntactic, and semantic information, such as named entities, semantic closeness, and discourse relations. To extract this lexical, syntactic, and semantic information from training corpora to the fullest extent, we propose a continual pre-training framework named ERNIE 2.0, which incrementally builds and learns pre-training tasks through continual multi-task learning. Experimental results demonstrate that ERNIE 2.0 outperforms BERT and XLNet on 16 tasks, including the English GLUE benchmark and several common Chinese tasks. The source code and pre-trained models have been released at https://github.com/PaddlePaddle/ERNIE.
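To make the training schedule concrete, below is a minimal Python sketch of one way the continual multi-task setup described above could be organized: each stage introduces a new pre-training task and then trains jointly on all tasks introduced so far, so earlier tasks keep being revisited rather than abandoned. The task names, the random task-sampling scheme, and the step counts are illustrative assumptions and do not reflect the released PaddlePaddle implementation.

```python
# Minimal sketch of continual multi-task pre-training, assuming a simple
# "add one task per stage, sample among all active tasks" schedule.
import random

# Illustrative task names, loosely grouped by the kind of information they target.
PRETRAINING_TASKS = [
    "knowledge_masking",    # word-aware / lexical
    "sentence_reordering",  # structure-aware / syntactic
    "discourse_relation",   # semantic-aware
]


def train_step(model_state, task):
    """Stand-in for one optimizer step on a batch drawn for `task`."""
    model_state[task] = model_state.get(task, 0) + 1  # count steps per task
    return model_state


def continual_multitask_pretrain(tasks, steps_per_stage=1000, seed=0):
    rng = random.Random(seed)
    model_state = {}
    active_tasks = []
    for stage, new_task in enumerate(tasks, start=1):
        active_tasks.append(new_task)  # incrementally add the new pre-training task
        for _ in range(steps_per_stage):
            # Multi-task learning: sample among all tasks seen so far, so earlier
            # tasks continue to be trained instead of being forgotten.
            task = rng.choice(active_tasks)
            model_state = train_step(model_state, task)
        print(f"stage {stage}: trained on {active_tasks}")
    return model_state


if __name__ == "__main__":
    continual_multitask_pretrain(PRETRAINING_TASKS, steps_per_stage=100)
```

The key design point the sketch tries to capture is that new tasks are added incrementally while all previously introduced tasks stay in the training mix, which is what distinguishes continual multi-task learning from simply training each task in sequence.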

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Chinese Sentence Pair Classification | BQ | ERNIE 2.0 Large | Accuracy | 85.2 | # 2 |
| Chinese Sentence Pair Classification | BQ | ERNIE 2.0 Base | Accuracy | 85.0 | # 3 |
| Chinese Sentence Pair Classification | BQ Dev | ERNIE 2.0 Large | Accuracy | 86.5 | # 1 |
| Chinese Sentence Pair Classification | BQ Dev | ERNIE 2.0 Base | Accuracy | 86.4 | # 2 |
| Chinese Sentiment Analysis | ChnSentiCorp | ERNIE 2.0 Large | Accuracy | 95.8 | # 1 |
| Chinese Sentiment Analysis | ChnSentiCorp | ERNIE 2.0 Base | Accuracy | 95.5 | # 2 |
| Chinese Sentiment Analysis | ChnSentiCorp Dev | ERNIE 2.0 Large | Accuracy | 96.1 | # 1 |
| Chinese Sentiment Analysis | ChnSentiCorp Dev | ERNIE 2.0 Base | Accuracy | 95.7 | # 2 |
| Chinese Reading Comprehension | CMRC 2018 (Simplified Chinese) Dev | ERNIE 2.0 Base | EM | 69.1 | # 1 |
| Chinese Reading Comprehension | CMRC 2018 (Simplified Chinese) Dev | ERNIE 2.0 Large | EM | 28.5 | # 3 |
| Linguistic Acceptability | CoLA | ERNIE 2.0 Base | Accuracy | 55.2% | # 34 |
| Linguistic Acceptability | CoLA | ERNIE 2.0 Large | Accuracy | 63.5% | # 27 |
| Chinese Reading Comprehension | DRCD (Traditional Chinese) | ERNIE 2.0 Large | EM | 89 | # 2 |
| Chinese Reading Comprehension | DRCD (Traditional Chinese) | ERNIE 2.0 Base | EM | 88.0 | # 3 |
| Chinese Reading Comprehension | DRCD (Traditional Chinese) Dev | ERNIE 2.0 Large | EM | 89.7 | # 1 |
| Chinese Reading Comprehension | DRCD (Traditional Chinese) Dev | ERNIE 2.0 Base | EM | 88.5 | # 3 |
| Open-Domain Question Answering | DuReader | ERNIE 2.0 Large | EM | 64.2 | # 1 |
| Open-Domain Question Answering | DuReader | ERNIE 2.0 Base | EM | 61.3 | # 2 |
| Chinese Sentence Pair Classification | LCQMC | ERNIE 2.0 Base | Accuracy | 87.9 | # 2 |
| Chinese Sentence Pair Classification | LCQMC | ERNIE 2.0 Large | Accuracy | 87.9 | # 2 |
| Chinese Sentence Pair Classification | LCQMC Dev | ERNIE 2.0 Base | Accuracy | 90.9 | # 1 |
| Chinese Sentence Pair Classification | LCQMC Dev | ERNIE 2.0 Large | Accuracy | 90.9 | # 1 |
| Semantic Textual Similarity | MRPC | ERNIE 2.0 Base | Accuracy | 86.1% | # 33 |
| Semantic Textual Similarity | MRPC | ERNIE 2.0 Large | Accuracy | 87.4% | # 28 |
| Chinese Named Entity Recognition | MSRA | ERNIE 2.0 Base | F1 | 93.8 | # 14 |
| Chinese Named Entity Recognition | MSRA | ERNIE 2.0 Large | F1 | 95 | # 10 |
| Chinese Named Entity Recognition | MSRA Dev | ERNIE 2.0 Large | F1 | 96.3 | # 1 |
| Chinese Named Entity Recognition | MSRA Dev | ERNIE 2.0 Base | F1 | 95.2 | # 2 |
| Natural Language Inference | MultiNLI | ERNIE 2.0 Large | Matched | 88.7 | # 13 |
| Natural Language Inference | MultiNLI | ERNIE 2.0 Large | Mismatched | 88.8 | # 9 |
| Natural Language Inference | MultiNLI | ERNIE 2.0 Base | Matched | 86.1 | # 26 |
| Natural Language Inference | MultiNLI | ERNIE 2.0 Base | Mismatched | 85.5 | # 18 |
| Chinese Sentence Pair Classification | NLPCC-DBQA | ERNIE 2.0 Large | MRR | 95.8 | # 1 |
| Chinese Sentence Pair Classification | NLPCC-DBQA | ERNIE 2.0 Base | MRR | 95.7 | # 2 |
| Chinese Sentence Pair Classification | NLPCC-DBQA Dev | ERNIE 2.0 Base | MRR | 95.7 | # 2 |
| Chinese Sentence Pair Classification | NLPCC-DBQA Dev | ERNIE 2.0 Large | MRR | 95.9 | # 1 |
| Natural Language Inference | QNLI | ERNIE 2.0 Base | Accuracy | 92.9% | # 24 |
| Natural Language Inference | QNLI | ERNIE 2.0 Large | Accuracy | 94.6% | # 14 |
| Question Answering | Quora Question Pairs | ERNIE 2.0 Large | Accuracy | 90.1% | # 7 |
| Question Answering | Quora Question Pairs | ERNIE 2.0 Base | Accuracy | 89.8% | # 10 |
| Natural Language Inference | RTE | ERNIE 2.0 Large | Accuracy | 80.2% | # 35 |
| Natural Language Inference | RTE | ERNIE 2.0 Base | Accuracy | 74.8% | # 45 |
| Sentiment Analysis | SST-2 Binary classification | ERNIE 2.0 Base | Accuracy | 95 | # 25 |
| Semantic Textual Similarity | STS Benchmark | ERNIE 2.0 Large | Pearson Correlation | 0.912 | # 12 |
| Semantic Textual Similarity | STS Benchmark | ERNIE 2.0 Base | Pearson Correlation | 0.876 | # 23 |
| Natural Language Inference | WNLI | ERNIE 2.0 Large | Accuracy | 67.8 | # 19 |
| Natural Language Inference | XNLI Chinese | ERNIE 2.0 Base | Accuracy | 79.7 | # 2 |
| Natural Language Inference | XNLI Chinese | ERNIE 2.0 Large | Accuracy | 81 | # 1 |
| Natural Language Inference | XNLI Chinese Dev | ERNIE 2.0 Base | Accuracy | 81.2 | # 2 |
| Natural Language Inference | XNLI Chinese Dev | ERNIE 2.0 Large | Accuracy | 82.6 | # 1 |
