CiteSum: Citation Text-guided Scientific Extreme Summarization and Domain Adaptation with Limited Supervision

12 May 2022 · Yuning Mao, Ming Zhong, Jiawei Han

Scientific extreme summarization (TLDR) aims to form ultra-short summaries of scientific papers. Previous efforts on curating scientific TLDR datasets failed to scale up due to the heavy human annotation and domain expertise required. In this paper, we propose a simple yet effective approach to automatically extracting TLDR summaries for scientific papers from their citation texts. Based on the proposed approach, we create a new benchmark CiteSum without human annotation, which is around 30 times larger than the previous human-curated dataset SciTLDR. We conduct a comprehensive analysis of CiteSum, examining its data characteristics and establishing strong baselines. We further demonstrate the usefulness of CiteSum by adapting models pre-trained on CiteSum (named CITES) to new tasks and domains with limited supervision. For scientific extreme summarization, CITES outperforms most fully-supervised methods on SciTLDR without any fine-tuning and obtains state-of-the-art results with only 128 examples. For news extreme summarization, CITES achieves significant gains on XSum over its base model (not pre-trained on CiteSum), e.g., +7.2 ROUGE-1 zero-shot performance and state-of-the-art few-shot performance. For news headline generation, CITES performs the best among unsupervised and zero-shot methods on Gigaword. Our dataset and code can be found at https://github.com/morningmoni/CiteSum.
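The core idea of building CiteSum is that a sentence citing a paper often already reads like a one-sentence summary of it. A minimal sketch of that idea is shown below: take a citing sentence and strip inline citation markers to obtain a TLDR candidate. The function name and the specific cleaning rules are illustrative assumptions, not the paper's actual pipeline, which applies additional filtering and selection over S2ORC citation texts.

```python
import re

def citation_to_tldr(citation_sentence: str) -> str:
    """Turn a citing sentence into a TLDR candidate by stripping
    citation markers. A hypothetical simplification of the kind of
    cleaning a citation-text pipeline would need."""
    # Numeric markers such as [12] or [3, 7]
    text = re.sub(r"\[\d+(?:,\s*\d+)*\]", "", citation_sentence)
    # Author-year markers such as (Mao et al., 2022)
    text = re.sub(r"\((?:[A-Z][^()]*\d{4}[a-z]?)\)", "", text)
    # Tidy up whitespace left behind by the removals
    text = re.sub(r"\s{2,}", " ", text)
    text = re.sub(r"\s+([.,;])", r"\1", text)
    return text.strip()
```

For example, `citation_to_tldr("CiteSum [12] proposes extreme summarization (Mao et al., 2022).")` yields `"CiteSum proposes extreme summarization."`.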


Datasets


Introduced in the Paper:

CiteSum

Used in the Paper:

S2ORC ScisummNet SciTLDR TalkSumm

Results from the Paper


Task: Extreme Summarization · Benchmark: CiteSum (rank shown per metric)

Model                                    ROUGE-1 (rank)   ROUGE-2 (rank)   ROUGE-L (rank)
EXT-ORACLE                               44.17 (#1)       27.22 (#1)       38.32 (#1)
BART-large (s=abs+title, t=TLDR)         42.02 (#2)       19.44 (#3)       33.78 (#2)
BART-large (s=abs+disci, t=TLDR)         42.01 (#3)       19.34 (#5)       33.72 (#4)
BART-large (s=abs, t=TLDR/Title/Disci)   41.89 (#4)       19.51 (#2)       33.73 (#3)
BART-large (s=abs, t=TLDR)               41.86 (#5)       19.36 (#4)       33.72 (#4)
BART-large (s=abs, t=TLDR/title)         41.85 (#6)       19.21 (#6)       33.42 (#7)
PEGASUS (s=abs, t=TLDR)                  41.56 (#7)       18.63 (#7)       33.45 (#6)
EXT-HEURISTIC                            29.32 (#8)       12.53 (#8)       23.99 (#8)
EXT-LEAD                                 21.94 (#9)        7.35 (#9)       17.36 (#9)
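All scores in the table are ROUGE F1 values. As a reference for what ROUGE-1 measures, here is a minimal unigram ROUGE-1 F1 re-implementation; published numbers are typically computed with the official ROUGE toolkit (with stemming and other preprocessing), so this sketch will not exactly reproduce them.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between a candidate summary
    and a reference summary (whitespace tokenization, no stemming)."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    # Clipped unigram overlap: each reference token counts at most
    # as many times as it appears in the candidate, and vice versa.
    overlap = sum((Counter(cand) & Counter(ref)).values())
    if not cand or not ref or not overlap:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

For identical strings the score is 1.0; for `"a b c"` against `"a b d"` the overlap is 2, giving precision = recall = 2/3 and F1 = 2/3.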
