Abstractive Text Summarization

325 papers with code • 19 benchmarks • 48 datasets

Abstractive Text Summarization is the task of generating a concise summary that captures the salient ideas of the source text. The generated summaries may contain new phrases and sentences that do not appear in the source text.

Source: Generative Adversarial Network for Abstractive Text Summarization
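
The task is usually approached with sequence-to-sequence models. As a quick, hedged illustration of what abstractive (rather than extractive) output looks like, the snippet below runs an off-the-shelf summarizer through the Hugging Face transformers pipeline; the checkpoint name and generation settings are illustrative choices, not recommendations from this page.

```python
# Minimal sketch: abstractive summarization with a pretrained seq2seq model.
# The checkpoint is an illustrative choice, not one referenced on this page.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

document = (
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and the tallest structure in Paris. It was the first structure in the world "
    "to reach a height of 300 metres."
)
result = summarizer(document, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])  # a short paraphrase, possibly with new phrasing
```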

Most implemented papers

Neural Abstractive Text Summarization with Sequence-to-Sequence Models

tshi04/NATS 5 Dec 2018

As part of this survey, we also develop an open source library, namely the Neural Abstractive Text Summarizer (NATS) toolkit, for abstractive text summarization.

Encode, Tag, Realize: High-Precision Text Editing

google-research/lasertagger IJCNLP 2019

We propose LaserTagger - a sequence tagging approach that casts text generation as a text editing task.
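
To make the editing view concrete, here is a minimal sketch of realizing an output sentence from per-token keep/delete tags with optional added phrases; the tag format is illustrative and is not LaserTagger's exact tag vocabulary or API.

```python
# Sketch of tag-based text editing: each source token gets an action plus an
# optional phrase to insert before it. The tag format is illustrative only.

def realize(tokens, tags):
    output = []
    for token, (action, added_phrase) in zip(tokens, tags):
        if added_phrase:             # insert the predicted phrase before this token
            output.append(added_phrase)
        if action == "KEEP":
            output.append(token)     # "DELETE" simply drops the source token
    return " ".join(output)

source = "Dylan won the Nobel prize . Dylan is a songwriter .".split()
tags = ([("KEEP", None)] * 5         # keep "Dylan won the Nobel prize"
        + [("DELETE", ",")]          # replace the period with a comma
        + [("DELETE", None)]         # drop the repeated "Dylan"
        + [("KEEP", None)] * 4)      # keep "is a songwriter ."
print(realize(source, tags))         # Dylan won the Nobel prize , is a songwriter .
```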

Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond

theamrzaki/text_summurization_abstractive_methods CONLL 2016

In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora.
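
For a concrete picture of the attention step inside such an encoder-decoder, here is a minimal additive-attention module in PyTorch; the dimensions and layer names are illustrative and do not reproduce the paper's exact configuration (for example its large-vocabulary trick or pointer mechanism).

```python
# Minimal additive (Bahdanau-style) attention step for an encoder-decoder RNN.
# Shapes and layer names are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, attn_dim)
        self.w_dec = nn.Linear(dec_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1)

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, src_len, enc_dim), dec_state: (batch, dec_dim)
        scores = self.v(torch.tanh(self.w_enc(enc_states)
                                   + self.w_dec(dec_state).unsqueeze(1)))
        weights = torch.softmax(scores, dim=1)        # attention over source positions
        context = (weights * enc_states).sum(dim=1)   # weighted sum of encoder states
        return context, weights.squeeze(-1)

attn = AdditiveAttention(enc_dim=256, dec_dim=256, attn_dim=128)
context, weights = attn(torch.randn(2, 40, 256), torch.randn(2, 256))
```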

Global Encoding for Abstractive Summarization

lancopku/Global-Encoding ACL 2018

To tackle the problem, we propose a global encoding framework, which controls the information flow from the encoder to the decoder based on the global information of the source context.
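
A heavily simplified sketch of the gating idea is given below: a convolution over the encoder outputs produces element-wise gates that filter what flows on to the decoder. The paper's convolutional gated unit also incorporates self-attention and multiple kernel sizes; this is only an illustration of the gating principle.

```python
# Hedged sketch of gating encoder outputs with a convolution over the source context.
# Kernel size and layer choices are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class ConvGate(nn.Module):
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2)

    def forward(self, enc_states):
        # enc_states: (batch, src_len, dim)
        g = torch.sigmoid(self.conv(enc_states.transpose(1, 2))).transpose(1, 2)
        return enc_states * g  # gates control how much of each state reaches the decoder

gated = ConvGate(dim=256)(torch.randn(2, 40, 256))
```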

Unsupervised Abstractive Meeting Summarization with Multi-Sentence Compression and Budgeted Submodular Maximization

dascim/acl2018_abssumm ACL 2018

We introduce a novel graph-based framework for abstractive meeting speech summarization that is fully unsupervised and does not rely on any annotations.
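
The budgeted selection step can be pictured with a simple greedy routine that repeatedly adds the candidate with the best coverage gain per unit cost while staying under a length budget; the coverage objective and costs below are toy placeholders, not the paper's actual scoring functions.

```python
# Toy sketch of greedy budgeted submodular maximization for summary selection.
# Objective: number of distinct words covered; cost: a per-candidate length.

def greedy_budgeted_selection(candidates, budget):
    # candidates: list of (word_list, cost) pairs
    selected, covered, spent = [], set(), 0
    remaining = list(candidates)
    while remaining:
        best = max(remaining, key=lambda c: len(set(c[0]) - covered) / c[1])
        remaining.remove(best)
        words, cost = best
        if spent + cost <= budget and set(words) - covered:
            selected.append(best)      # keep it only if it fits and adds coverage
            covered |= set(words)
            spent += cost
    return selected

summary = greedy_budgeted_selection(
    [("the meeting starts at noon".split(), 5),
     ("the budget was approved".split(), 4),
     ("noon meeting".split(), 2)],
    budget=9)
```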

Pay Less Attention with Lightweight and Dynamic Convolutions

pytorch/fairseq ICLR 2019

We predict separate convolution kernels based solely on the current time-step in order to determine the importance of context elements.
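
A single-head, heavily simplified sketch of a dynamic convolution is shown below: a linear projection predicts a softmax-normalized kernel from the current time-step alone, and that kernel is applied over a causal local window. The paper's lightweight and dynamic convolutions additionally share weights across channel groups and use GLU inputs; this sketch omits those details.

```python
# Hedged sketch of a dynamic convolution: the kernel is a function of the current
# time-step only (single head, no weight sharing; illustrative, not fairseq's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv(nn.Module):
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        self.kernel_proj = nn.Linear(dim, kernel_size)   # kernel depends only on x_t

    def forward(self, x):
        # x: (batch, seq_len, dim)
        weights = F.softmax(self.kernel_proj(x), dim=-1)       # (batch, seq_len, k)
        padded = F.pad(x, (0, 0, self.kernel_size - 1, 0))     # causal left padding
        windows = padded.unfold(1, self.kernel_size, 1)        # (batch, seq_len, dim, k)
        return torch.einsum("bsk,bsdk->bsd", weights, windows)

out = DynamicConv(dim=256)(torch.randn(2, 10, 256))
```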

Pretraining-Based Natural Language Generation for Text Summarization

nayeon7lee/bert-summarization CONLL 2019

For the decoder, our model has two stages: in the first stage, we use a Transformer-based decoder to generate a draft output sequence.
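
Schematically, the two-stage idea is a draft pass followed by a refinement pass that re-predicts each draft token with full bidirectional context; the sketch below uses placeholder callables rather than the paper's BERT-based refinement model.

```python
# Hedged sketch of two-stage decoding: autoregressive draft, then per-token refinement.
# `draft_step` and `refine_step` are placeholders standing in for real model calls.
def two_stage_decode(draft_step, refine_step, max_len, bos, eos, mask):
    draft = [bos]
    while len(draft) < max_len and draft[-1] != eos:
        draft.append(draft_step(draft))              # stage 1: left-to-right draft
    refined = list(draft)
    for i in range(1, len(refined)):
        masked = refined[:i] + [mask] + refined[i + 1:]
        refined[i] = refine_step(masked, i)          # stage 2: re-predict with full context
    return refined

# toy usage with dummy "models"
print(two_stage_decode(lambda prefix: min(prefix[-1] + 1, 4),
                       lambda masked, i: masked[i - 1] + 1,
                       max_len=5, bos=0, eos=4, mask=-1))
```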

Evaluating the Factual Consistency of Abstractive Text Summarization

yuhui-zh15/FactCCX EMNLP 2020

Currently used metrics for assessing summarization algorithms do not account for whether summaries are factually consistent with source documents.

ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training

microsoft/ProphetNet 13 Jan 2020

This paper presents a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective, future n-gram prediction, together with a proposed n-stream self-attention mechanism.
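
The future n-gram objective can be pictured with a toy loss in which the model emits one set of logits per future offset at every position, and each set is trained against the correspondingly shifted targets; the shapes below are illustrative and padding and masking are omitted.

```python
# Toy sketch of a future n-gram prediction loss (illustrative shapes, no masking).
import torch
import torch.nn.functional as F

def future_ngram_loss(logits, targets, n):
    # logits: (batch, seq_len, n, vocab); targets: (batch, seq_len)
    loss = 0.0
    for k in range(1, n + 1):
        pred = logits[:, :-k, k - 1, :]    # predictions for the token k steps ahead
        gold = targets[:, k:]              # targets shifted by k
        loss = loss + F.cross_entropy(pred.reshape(-1, pred.size(-1)), gold.reshape(-1))
    return loss / n

loss = future_ngram_loss(torch.randn(2, 12, 2, 100), torch.randint(0, 100, (2, 12)), n=2)
```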

ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation

PaddlePaddle/ERNIE 26 Jan 2020

Current pre-training works in natural language generation pay little attention to the problem of exposure bias on downstream tasks.
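
Exposure bias is the mismatch between training, where the decoder conditions on gold prefixes (teacher forcing), and inference, where it conditions on its own possibly erroneous predictions. The toy sketch below contrasts the two regimes; the `step` callable is a placeholder, not ERNIE-GEN's API.

```python
# Toy illustration of exposure bias: the same decoder loop fed either gold tokens
# (training-style teacher forcing) or its own previous outputs (inference-style).
def decode(step, gold, teacher_forcing):
    prev, outputs = "<bos>", []
    for t in range(len(gold)):
        pred = step(prev)
        outputs.append(pred)
        prev = gold[t] if teacher_forcing else pred   # gold prefix vs. own prediction
    return outputs

gold = ["the", "cat", "sat"]
train_like = decode(lambda prev: prev.upper(), gold, teacher_forcing=True)
test_like = decode(lambda prev: prev.upper(), gold, teacher_forcing=False)
# In the second call, an early output is fed back in, so errors can compound.
```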