Abstractive Text Summarization

326 papers with code • 19 benchmarks • 48 datasets

Abstractive Text Summarization is the task of generating a concise summary that captures the salient ideas of the source text. The generated summary may contain new phrases and sentences that do not appear in the source text.

Source: Generative Adversarial Network for Abstractive Text Summarization

Image credit: Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
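
As a concrete illustration of the task, here is a minimal sketch using the Hugging Face transformers summarization pipeline. The checkpoint facebook/bart-large-cnn is a common choice assumed here for illustration; it is not tied to any particular paper on this page.

```python
# Minimal abstractive summarization example with the transformers pipeline.
from transformers import pipeline

# facebook/bart-large-cnn is one widely used summarization checkpoint.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and was the tallest man-made structure in the world for 41 "
    "years until the Chrysler Building in New York City was finished in 1930."
)

# max_length / min_length bound the generated summary's length in tokens.
result = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```

Because the decoder generates tokens freely rather than copying spans, the output can contain words that never occur in the article, which is what separates abstractive from extractive summarization.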

Most implemented papers

Deep Reinforcement Learning For Sequence to Sequence Models

yaserkl/RLSeq2Seq 24 May 2018

In this survey, we consider seq2seq problems from the RL point of view and provide a formulation that combines the decision-making power of RL methods with the ability of sequence-to-sequence models to retain long-term memory.
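
As an illustration of that RL formulation, the sketch below applies a single REINFORCE update to a toy decoder: sample an output sequence from the model, score it against a reference with a sequence-level reward, and weight the sequence log-probability by that reward. The toy reward (unigram overlap, a stand-in for ROUGE) and the dummy logits are assumptions for illustration, not the repository's implementation.

```python
# Illustrative REINFORCE step for a seq2seq policy (not the RLSeq2Seq code).
import torch

def reward_fn(sample_ids, reference_ids):
    # Toy sequence-level reward: unigram overlap with the reference,
    # standing in for ROUGE, which summarization work typically optimizes.
    ref = set(reference_ids.tolist())
    hits = sum(1 for t in sample_ids.tolist() if t in ref)
    return hits / max(len(sample_ids), 1)

def reinforce_step(logits, reference_ids, baseline=0.0):
    # logits: (seq_len, vocab) decoder scores for one sequence.
    dist = torch.distributions.Categorical(logits=logits)
    sample = dist.sample()                  # sampled output token ids
    log_prob = dist.log_prob(sample).sum()  # log p(sample | input)
    reward = reward_fn(sample, reference_ids)
    # Policy-gradient loss: -(reward - baseline) * log-probability.
    return -(reward - baseline) * log_prob, reward

# Dummy usage: random logits over a 50-token vocabulary, 12 decoding steps.
logits = torch.randn(12, 50, requires_grad=True)
loss, reward = reinforce_step(logits, torch.randint(0, 50, (12,)))
loss.backward()  # gradients would flow into the decoder parameters
```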

Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting

ChenRocks/fast_abs_rl ACL 2018

Inspired by how humans summarize long documents, we propose an accurate and fast summarization model that first selects salient sentences and then rewrites them abstractively (i.e., compresses and paraphrases) to generate a concise overall summary.
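
The extract-then-rewrite decomposition can be pictured with a loose, non-RL approximation: pick salient sentences, then compress each one. In the sketch below, TF-IDF centrality stands in for the learned extractor and a generic summarization model stands in for the learned rewriter; the actual repository trains both stages and bridges them with policy gradient, which is omitted here.

```python
# Loose extract-then-rewrite sketch (the paper couples the two stages with
# reinforcement learning; this only shows the decomposition).
from sklearn.feature_extraction.text import TfidfVectorizer
from transformers import pipeline

def select_salient(sentences, k=3):
    # Stage 1 stand-in: rank sentences by TF-IDF similarity to the document.
    vec = TfidfVectorizer().fit(sentences)
    sent_vecs = vec.transform(sentences)
    doc_vec = vec.transform([" ".join(sentences)])
    scores = (sent_vecs @ doc_vec.T).toarray().ravel()
    top = sorted(range(len(sentences)), key=lambda i: -scores[i])[:k]
    return [sentences[i] for i in sorted(top)]  # keep document order

rewriter = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize(sentences):
    # Stage 2 stand-in: compress/paraphrase each selected sentence.
    rewritten = [
        rewriter(s, max_length=30, min_length=5, do_sample=False)[0]["summary_text"]
        for s in select_salient(sentences)
    ]
    return " ".join(rewritten)
```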

Scoring Sentence Singletons and Pairs for Abstractive Summarization

ucfnlp/summarization-sing-pair-mix ACL 2019

There is thus a crucial gap between sentence selection and fusion: summarization should be supported both by compressing single sentences and by fusing pairs of sentences.
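
Since the candidates here are sentence singletons and sentence pairs, the candidate space itself is easy to write down; the sketch below enumerates it and ranks it with score, a hypothetical stand-in for the paper's learned scoring model.

```python
# Enumerate fusion candidates: single sentences plus sentence pairs.
from itertools import combinations

def fusion_candidates(sentences):
    singletons = [(i,) for i in range(len(sentences))]
    pairs = list(combinations(range(len(sentences)), 2))
    return singletons + pairs

def top_candidates(sentences, score, k=5):
    # `score` is a hypothetical callable mapping a list of sentences to a
    # salience score; the paper learns this model rather than assuming it.
    ranked = sorted(
        fusion_candidates(sentences),
        key=lambda idx: score([sentences[i] for i in idx]),
        reverse=True,
    )
    return ranked[:k]
```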

Unsupervised Opinion Summarization as Copycat-Review Generation

ixlan/CopyCat-abstractive-opinion-summarizer ACL 2020

At test time, when generating summaries, we force the novelty to be minimal, and produce a text reflecting consensus opinions.

UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training

microsoft/unilm 28 Feb 2020

We propose to pre-train a unified language model for both autoencoding and partially autoregressive language modeling tasks using a novel training procedure, referred to as a pseudo-masked language model (PMLM).

A Hierarchical Network for Abstractive Meeting Summarization with Cross-Domain Pretraining

microsoft/HMNet Findings of EMNLP 2020

With the abundance of automatic meeting transcripts, meeting summarization is of great interest to both participants and other parties.

Better Fine-Tuning by Reducing Representational Collapse

pytorch/fairseq ICLR 2021

Although widely adopted, existing approaches for fine-tuning pre-trained language models have been shown to be unstable across hyper-parameter settings, motivating recent work on trust region methods.
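
The remedy proposed in the paper (the R3F/R4F objectives shipped in fairseq) adds a noise-consistency term to the fine-tuning loss. The sketch below shows the general shape of such a regularizer: penalize the symmetric KL divergence between predictions on clean and noise-perturbed input embeddings. The model interface (a classifier called with inputs_embeds that returns logits) is a hypothetical assumption; only the structure of the objective follows the paper.

```python
# Sketch of an R3F-style noise-consistency regularizer for fine-tuning.
import torch
import torch.nn.functional as F

def r3f_style_loss(model, embeds, labels, eps=1e-5, lam=1.0):
    # embeds: (batch, seq, dim) input embeddings.
    # model(inputs_embeds=...) -> (batch, n_classes) logits (hypothetical API).
    clean_logits = model(inputs_embeds=embeds)
    noise = torch.empty_like(embeds).uniform_(-eps, eps)  # or N(0, eps^2) noise
    noisy_logits = model(inputs_embeds=embeds + noise)

    task_loss = F.cross_entropy(clean_logits, labels)

    p = F.log_softmax(clean_logits, dim=-1)
    q = F.log_softmax(noisy_logits, dim=-1)
    # Symmetric KL between the clean and noisy predictive distributions.
    sym_kl = (F.kl_div(p, q, log_target=True, reduction="batchmean")
              + F.kl_div(q, p, log_target=True, reduction="batchmean"))
    return task_loss + lam * sym_kl
```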

DebateSum: A large-scale argument mining and summarization dataset

Hellisotherpeople/DebateSum COLING (ArgMining) 2020

Finally, we present a search engine for this dataset which is utilized extensively by members of the National Speech and Debate Association today.

CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization

makcedward/nlpaug EMNLP 2021

We study generating abstractive summaries that are faithful and factually consistent with the given articles.
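
The contrastive signal can be pictured with a generic InfoNCE-style loss over summary representations: pull the embedding of a faithful summary toward its article and push synthetically hallucinated negatives away. This conveys the idea only; how CLIFF actually constructs positives and negatives and where it takes representations from are described in the paper.

```python
# Generic InfoNCE-style contrastive loss over summary embeddings.
# Illustrative only; not CLIFF's exact objective.
import torch
import torch.nn.functional as F

def contrastive_faithfulness_loss(article_vec, pos_vec, neg_vecs, tau=0.1):
    # article_vec: (dim,), pos_vec: (dim,), neg_vecs: (n_neg, dim)
    a = F.normalize(article_vec, dim=-1)
    candidates = torch.cat([pos_vec.unsqueeze(0), neg_vecs], dim=0)
    c = F.normalize(candidates, dim=-1)
    sims = (c @ a) / tau  # temperature-scaled cosine similarities
    # The faithful summary sits at index 0; cross-entropy makes it win.
    return F.cross_entropy(sims.unsqueeze(0), torch.zeros(1, dtype=torch.long))

loss = contrastive_faithfulness_loss(
    torch.randn(256), torch.randn(256), torch.randn(4, 256)
)
```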