ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation

26 Jan 2020 · Dongling Xiao, Han Zhang, Yukun Li, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

Current pre-training work in natural language generation pays little attention to the problem of exposure bias on downstream tasks. To address this issue, we propose ERNIE-GEN, an enhanced multi-flow sequence-to-sequence pre-training and fine-tuning framework that bridges the discrepancy between training and inference with an infilling generation mechanism and a noise-aware generation method. To bring generation closer to human writing patterns, the framework introduces a span-by-span generation flow that trains the model to predict semantically complete spans consecutively rather than predicting word by word. Unlike existing pre-training methods, ERNIE-GEN incorporates multi-granularity target sampling to construct pre-training data, which enhances the correlation between encoder and decoder. Experimental results demonstrate that ERNIE-GEN achieves state-of-the-art results with much less pre-training data and fewer parameters on a range of language generation tasks, including abstractive summarization (Gigaword and CNN/DailyMail), question generation (SQuAD), dialogue generation (Persona-Chat), and generative question answering (CoQA).
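Two of the ideas in the abstract lend themselves to a short illustration: noise-aware generation (corrupting part of the decoder's input history so the model learns to condition on imperfect previous predictions, mitigating exposure bias) and span-by-span target construction. The Python sketch below is a hypothetical data-preparation helper, not the authors' implementation; the names `add_noise` and `sample_spans` are invented here, and ERNIE-GEN proper samples semantically complete spans, whereas this sketch uses random-length spans purely for simplicity.

```python
import random

def add_noise(target_tokens, vocab, noise_rate=0.15, rng=random):
    """Sketch of noise-aware generation: randomly replace a fraction of
    target-side history tokens with words sampled from the vocabulary,
    so training resembles decoding over imperfect previous outputs."""
    noised = []
    for tok in target_tokens:
        if rng.random() < noise_rate:
            noised.append(rng.choice(vocab))  # corrupted history token
        else:
            noised.append(tok)                # clean token kept as-is
    return noised

def sample_spans(target_tokens, max_span_len=3, rng=random):
    """Sketch of span-by-span targets: partition the target sequence into
    consecutive multi-token spans (random lengths here; the paper's spans
    are semantically informed) to be predicted span by span."""
    spans, i = [], 0
    while i < len(target_tokens):
        n = rng.randint(1, max_span_len)
        spans.append(target_tokens[i:i + n])
        i += n
    return spans

if __name__ == "__main__":
    vocab = ["the", "a", "cat", "dog", "runs", "fast"]
    target = ["the", "quick", "brown", "fox", "jumps"]
    print(add_noise(target, vocab, noise_rate=0.3))
    print(sample_spans(target, max_span_len=3))
```

In such a setup, the noised sequence would feed the decoder input while the original clean tokens remain the prediction targets, and each sampled span would be predicted as a unit rather than one word at a time.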


Results from the Paper


 Ranked #1 on Question Generation on SQuAD1.1 (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Abstractive Text Summarization | CNN / Daily Mail | ERNIE-GEN LARGE (large-scale text corpora) | ROUGE-1 | 44.31 | #15 |
| | | | ROUGE-2 | 21.35 | #15 |
| | | | ROUGE-L | 41.60 | #8 |
| Abstractive Text Summarization | CNN / Daily Mail | ERNIE-GEN BASE | ROUGE-1 | 42.30 | #26 |
| | | | ROUGE-2 | 19.92 | #24 |
| | | | ROUGE-L | 39.68 | #26 |
| Abstractive Text Summarization | CNN / Daily Mail | ERNIE-GEN LARGE | ROUGE-1 | 44.02 | #20 |
| | | | ROUGE-2 | 21.17 | #18 |
| | | | ROUGE-L | 41.26 | #15 |
| Generative Question Answering | CoQA | ERNIE-GEN | F1-Score | 84.5 | #1 |
| Text Summarization | GigaWord | ERNIE-GEN LARGE (large-scale text corpora) | ROUGE-1 | 39.46 | #8 |
| | | | ROUGE-2 | 20.34 | #12 |
| | | | ROUGE-L | 36.74 | #7 |
| Text Summarization | GigaWord | ERNIE-GEN LARGE | ROUGE-1 | 39.25 | #11 |
| | | | ROUGE-2 | 20.25 | #13 |
| | | | ROUGE-L | 36.53 | #13 |
| Text Summarization | GigaWord | ERNIE-GEN BASE | ROUGE-1 | 38.83 | #17 |
| | | | ROUGE-2 | 20.04 | #15 |
| | | | ROUGE-L | 36.20 | #16 |
| Text Summarization | GigaWord-10k | ERNIE-GEN BASE | ROUGE-1 | 33.75 | #3 |
| | | | ROUGE-2 | 15.23 | #3 |
| | | | ROUGE-L | 31.35 | #3 |
| Text Summarization | GigaWord-10k | ERNIE-GEN LARGE | ROUGE-1 | 35.05 | #2 |
| | | | ROUGE-2 | 16.10 | #2 |
| | | | ROUGE-L | 32.50 | #2 |
| Text Summarization | GigaWord-10k | ERNIE-GEN LARGE (large-scale text corpora) | ROUGE-1 | 35.51 | #1 |
| | | | ROUGE-2 | 16.79 | #1 |
| | | | ROUGE-L | 33.23 | #1 |
| Question Generation | SQuAD1.1 | ERNIE-GEN LARGE (beam size=5) | BLEU-4 | 25.41 | #1 |
