Unified Language Model Pre-training for Natural Language Understanding and Generation

This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on. UniLM compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks. Moreover, UniLM achieves new state-of-the-art results on five natural language generation datasets, including improving the CNN/DailyMail abstractive summarization ROUGE-L to 40.51 (2.04 absolute improvement), the Gigaword abstractive summarization ROUGE-L to 35.75 (0.86 absolute improvement), the CoQA generative question answering F1 score to 82.5 (37.1 absolute improvement), the SQuAD question generation BLEU-4 to 22.12 (3.75 absolute improvement), and the DSTC7 document-grounded dialog response generation NIST-4 to 2.67 (human performance is 2.65). The code and pre-trained models are available at https://github.com/microsoft/unilm.
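
The self-attention masks are the mechanism that lets a single shared Transformer serve all three objectives. Below is a minimal sketch, in PyTorch, of how such masks could be constructed; the helper name `build_attention_mask` and its signature are hypothetical illustrations, not taken from the UniLM codebase. A value of 1 marks a position a token may attend to, 0 a blocked position.

```python
import torch

def build_attention_mask(mode: str, src_len: int, tgt_len: int = 0) -> torch.Tensor:
    """Return an (L, L) mask where 1 means "may attend" and 0 means "blocked".

    Hypothetical sketch of the three masking patterns described in the paper:
      "bidirectional"  - every token sees every token (BERT-style MLM)
      "unidirectional" - token i sees tokens 0..i only (left-to-right LM)
      "seq2seq"        - source is bidirectional; target sees the source
                         plus earlier target tokens only
    """
    total = src_len + tgt_len
    if mode == "bidirectional":
        # Full attention: no positions are masked out.
        return torch.ones(total, total)
    if mode == "unidirectional":
        # Lower-triangular mask: each token attends only to itself and the past.
        return torch.tril(torch.ones(total, total))
    if mode == "seq2seq":
        mask = torch.zeros(total, total)
        # Source tokens attend bidirectionally within the source segment.
        mask[:src_len, :src_len] = 1
        # Target tokens attend to the entire source segment...
        mask[src_len:, :src_len] = 1
        # ...and causally (left-to-right) within the target segment.
        mask[src_len:, src_len:] = torch.tril(torch.ones(tgt_len, tgt_len))
        return mask
    raise ValueError(f"unknown mode: {mode}")

# Example: a 4-token source and a 3-token target produce a 7x7 seq2seq mask.
print(build_attention_mask("seq2seq", src_len=4, tgt_len=3))
```

In the printed seq2seq mask, the 4 source positions attend to one another freely while the 3 target positions attend to the full source but only to earlier target tokens, which is the conditioning pattern the abstract describes for sequence-to-sequence prediction.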


Datasets


Introduced in the Paper:

Liu et al. Corpus

Used in the Paper:

GLUE, SQuAD, CNN / Daily Mail, CoQA

Results from the Paper


Ranked #2 on Generative Question Answering on CoQA (using extra training data)

Abstractive Text Summarization on CNN / Daily Mail (UniLM):
    ROUGE-1 43.08 (rank #25), ROUGE-2 20.43 (rank #21), ROUGE-L 40.34 (rank #24)
Document Summarization on CNN / Daily Mail (UniLM, abstractive summarization):
    ROUGE-1 43.08 (rank #13), ROUGE-2 20.43 (rank #10), ROUGE-L 40.34 (rank #11)
Generative Question Answering on CoQA (UniLM, uses extra training data):
    F1 82.5 (rank #2)
Text Summarization on GigaWord (UniLM):
    ROUGE-1 38.90 (rank #16), ROUGE-2 20.05 (rank #14), ROUGE-L 36.00 (rank #17)
Question Generation on SQuAD1.1 (UniLM):
    BLEU-4 22.78 (rank #8), METEOR 25.1 (rank #6), ROUGE-L 51.1 (rank #6)
