Extractive Summarization as Text Matching

This paper presents a paradigm shift in the way we build neural extractive summarization systems. Instead of following the commonly used framework of extracting sentences individually and modeling the relationships between sentences, we formulate the extractive summarization task as a semantic text matching problem, in which a source document and candidate summaries (extracted from the original text) are matched in a semantic space. Notably, this paradigm shift to a semantic matching framework is well-grounded in our comprehensive analysis of the inherent gap between sentence-level and summary-level extractors, based on the properties of the dataset. Moreover, even when instantiating the framework with a simple form of a matching model, we push the state-of-the-art extractive result on CNN/DailyMail to a new level (44.41 in ROUGE-1). Experiments on five other datasets also show the effectiveness of the matching framework. We believe the power of this matching-based summarization framework has not been fully exploited. To encourage more instantiations in the future, we have released our code, processed datasets, and generated summaries at https://github.com/maszhongming/MatchSum.
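To make the matching formulation concrete, below is a minimal sketch (not the released MatchSum implementation): a shared encoder embeds the document and each candidate summary into one semantic space, and candidates are ranked by cosine similarity to the document. The model name ("bert-base-uncased"), the helper functions `embed` and `rank_candidates`, and the example texts are illustrative assumptions; the paper's full model is additionally trained so that better candidates (as measured against the reference summary) lie closer to the document.

```python
# Minimal sketch of extractive summarization as semantic text matching.
# Assumptions (not from the paper's released code): bert-base-uncased encoder,
# [CLS] pooling, cosine similarity as the matching score, placeholder texts.
import torch
import torch.nn.functional as F
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")
encoder.eval()

def embed(text: str) -> torch.Tensor:
    """Encode text and use the [CLS] vector as its semantic representation."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state[:, 0]          # shape: (1, hidden_size)

def rank_candidates(document: str, candidates: list[str]) -> list[tuple[float, str]]:
    """Score each candidate summary by cosine similarity to the document."""
    doc_vec = embed(document)
    scored = [(F.cosine_similarity(doc_vec, embed(c)).item(), c) for c in candidates]
    return sorted(scored, reverse=True)              # best-matching candidate first

# In the full framework, candidates are combinations of sentences extracted
# from the source document; the strings below are only placeholders.
document = "The city council approved the new park budget on Monday. ..."
candidates = [
    "The city council approved the new park budget.",
    "A storm is expected later this week.",
]
print(rank_candidates(document, candidates)[0])
```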


Results from the Paper


Values are ROUGE scores; the global leaderboard rank for each metric is shown in parentheses.

| Task | Dataset | Model | ROUGE-1 | ROUGE-2 | ROUGE-L |
| --- | --- | --- | --- | --- | --- |
| Text Summarization | BBC XSum | MatchSum | 24.86 (#1) | 4.66 (#1) | 18.41 (#1) |
| Extractive Text Summarization | CNN / Daily Mail | MatchSum | 44.41 (#2) | 20.86 (#2) | 40.55 (#2) |
| Document Summarization | CNN / Daily Mail | MatchSum (RoBERTa-base) | 44.41 (#6) | 20.86 (#8) | 40.55 (#9) |
| Document Summarization | CNN / Daily Mail | MatchSum (BERT-base) | 44.22 (#8) | 20.62 (#9) | 40.38 (#10) |
| Text Summarization | PubMed | MatchSum (BERT-base) | 41.21 (#25) | 14.91 (#21) | 36.75 (#17) |
| Text Summarization | Reddit TIFU | MatchSum | 25.09 (#5) | 6.17 (#5) | 20.13 (#5) |
| Text Summarization | WikiHow | MatchSum (BERT-base) | 31.85 (#2) | 8.98 (#3) | 29.58 (#2) |
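For reference, the kind of ROUGE-1/2/L scores reported above can be computed with the `rouge-score` package. This is a hedged example of one common implementation, not necessarily the exact evaluation toolkit used to produce the numbers in the table; the reference and candidate strings are placeholders.

```python
# Example of computing ROUGE-1, ROUGE-2, and ROUGE-L F1 scores
# using Google's rouge-score package (pip install rouge-score).
# The texts below are placeholders, not data from the paper.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "the city council approved the park budget on monday"
candidate = "the council approved the new park budget"

scores = scorer.score(reference, candidate)
for name, result in scores.items():
    print(f"{name}: F1 = {result.fmeasure:.4f}")
```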
