Leveraging Lead Bias for Zero-shot Abstractive News Summarization

25 Dec 2019 · Chenguang Zhu, Ziyi Yang, Robert Gmyr, Michael Zeng, Xuedong Huang

A typical journalistic convention in news articles is to deliver the most salient information at the beginning, also known as the lead bias. While this phenomenon can be exploited when generating a summary, it has a detrimental effect on teaching a model to discriminate and extract important information in general. We propose that this lead bias can be leveraged in our favor in a simple and effective way to pre-train abstractive news summarization models on large-scale unlabeled news corpora: predicting the leading sentences using the rest of an article. We collect a massive news corpus and conduct data cleaning and filtering via statistical analysis. We then apply self-supervised pre-training on this dataset to the existing generation models BART and T5 for domain adaptation. Via extensive experiments on six benchmark datasets, we show that this approach can dramatically improve summarization quality and achieve state-of-the-art results for zero-shot news summarization without any fine-tuning. For example, on the DUC2003 dataset, the ROUGE-1 score of BART increases by 13.7% after the lead-bias pre-training. We deploy the model in Microsoft News and provide public APIs as well as a demo website for multi-lingual news summarization.
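
The pre-training objective described in the abstract is self-supervised: each unlabeled article yields a (source, target) pair in which the target is the article's leading sentences and the source is the remainder of the article. The sketch below illustrates one plausible way to construct such pairs, including a simple overlap-based filter in the spirit of the statistical data cleaning the paper mentions. The specific thresholds, the naive sentence splitter, and the stopword list are illustrative assumptions, not the paper's exact pipeline.

```python
import re

# Illustrative thresholds only; the paper derives its filtering criteria
# from statistical analysis of the collected news corpus.
MIN_ARTICLE_SENTENCES = 6
NUM_LEAD_SENTENCES = 3
MIN_OVERLAP_RATIO = 0.3

# Tiny stopword list for illustration; a real pipeline would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "on", "for", "is", "was", "it"}


def split_sentences(text: str) -> list[str]:
    """Naive sentence splitter; a production pipeline would use a proper NLP tokenizer."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def content_words(text: str) -> set[str]:
    """Lowercased word set with stopwords removed, used for the overlap filter."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS}


def make_pretraining_pair(article_text: str):
    """Turn one unlabeled news article into a (source, target) pair:
    target = the leading sentences, source = the rest of the article.
    Returns None if the article is filtered out."""
    sents = split_sentences(article_text)
    if len(sents) < MIN_ARTICLE_SENTENCES:
        return None

    target = " ".join(sents[:NUM_LEAD_SENTENCES])
    source = " ".join(sents[NUM_LEAD_SENTENCES:])

    # Drop pairs where the lead shares too few content words with the body,
    # a rough proxy for the lead not actually summarizing the article.
    lead_words = content_words(target)
    if not lead_words:
        return None
    overlap = len(lead_words & content_words(source)) / len(lead_words)
    if overlap < MIN_OVERLAP_RATIO:
        return None

    return source, target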
