Abstractive Text Summarization
327 papers with code • 19 benchmarks • 48 datasets
Abstractive Text Summarization is the task of generating a short, concise summary that captures the salient ideas of the source text. The generated summaries may contain new phrases and sentences that do not appear in the source text.
Source: Generative Adversarial Network for Abstractive Text Summarization
Image credit: Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
Libraries
Use these libraries to find Abstractive Text Summarization models and implementations.
Datasets
Subtasks
Most implemented papers
Locally Typical Sampling
Automatic and human evaluations show that, in comparison to nucleus and top-k sampling, locally typical sampling offers competitive performance (in both abstractive summarization and story generation) in terms of quality while consistently reducing degenerate repetitions.
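Locally typical sampling keeps only the tokens whose surprisal (negative log-probability) is closest to the distribution's entropy, up to a cumulative mass threshold, and samples from that renormalized set. A minimal NumPy sketch of the selection rule (the threshold name `tau` and the exact tie-breaking are illustrative assumptions, not the paper's reference implementation):

```python
import numpy as np

def locally_typical_sample(probs, tau=0.95, rng=None):
    """Sample a token index via locally typical sampling (a sketch).

    Keeps the tokens whose surprisal -log p is closest to the entropy
    of the distribution, up to cumulative mass tau, then samples from
    the renormalized subset.
    """
    rng = rng or np.random.default_rng()
    probs = np.asarray(probs, dtype=float)
    logp = np.log(probs + 1e-12)
    entropy = -(probs * logp).sum()
    # Rank tokens by how far their surprisal deviates from the entropy.
    deviation = np.abs(-logp - entropy)
    order = np.argsort(deviation)
    # Smallest set (in deviation order) whose cumulative mass reaches tau.
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, tau) + 1
    keep = order[:cutoff]
    renormalized = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=renormalized))
```

With a sharply peaked distribution, the peaked token's surprisal sits closest to the (low) entropy, so the typical set collapses to that token; flatter distributions admit more candidates, which is what suppresses degenerate repetition.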
BRIO: Bringing Order to Abstractive Summarization
Abstractive summarization models are commonly trained using maximum likelihood estimation, which assumes a deterministic (one-point) target distribution in which an ideal model will assign all the probability mass to the reference summary.
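The one-point assumption can be made concrete: the MLE objective is just the negative log-likelihood of the single reference summary, so training only rewards reproducing that one target. A minimal sketch (function name is illustrative; BRIO's contribution is to additionally rank alternative candidate summaries by quality rather than rely on this loss alone):

```python
import math

def mle_nll(reference_token_logprobs):
    """Negative log-likelihood of the reference summary under the model.

    MLE treats the reference as the entire target distribution: all
    probability mass on one summary, zero on every other candidate.
    """
    return -sum(reference_token_logprobs)
```

For example, a model assigning probability 0.5 to each of two reference tokens incurs a loss of 2 * log 2, regardless of how good its other candidate summaries are.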
ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation
Applying Reinforcement Learning (RL) to sequence generation models enables the direct optimization of long-term rewards (e.g., BLEU and human feedback), but typically requires large-scale sampling over a space of action sequences.
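The sampling cost comes from estimating the policy gradient by Monte Carlo. A toy single-step sketch (not ESRL's method; real sequence-level RL samples whole token sequences and scores them with rewards like BLEU, whereas this assumes one categorical action and an illustrative `reward_fn`):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def reinforce_grad(logits, reward_fn, n_samples=256, rng=None):
    """REINFORCE-style Monte-Carlo estimate of grad E[R] w.r.t. logits.

    Uses grad log p(a) = onehot(a) - softmax(logits) for a categorical
    policy, with the mean sampled reward as a variance-reducing baseline.
    """
    rng = rng or np.random.default_rng(0)
    p = softmax(np.asarray(logits, dtype=float))
    actions = rng.choice(len(p), size=n_samples, p=p)
    rewards = np.array([reward_fn(a) for a in actions])
    baseline = rewards.mean()
    grad = np.zeros_like(p)
    for a, r in zip(actions, rewards):
        onehot = np.zeros_like(p)
        onehot[a] = 1.0
        grad += (r - baseline) * (onehot - p)
    return grad / n_samples
```

The estimate's variance shrinks only as 1/n_samples, which is why naive RL for generation needs large-scale sampling; efficiency-oriented methods aim to cut that sample budget.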
Diversity driven Attention Model for Query-based Abstractive Summarization
Abstractive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion.
Query-Based Abstractive Summarization Using Neural Networks
In this paper, we present a model for generating summaries of text documents with respect to a query.
A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents
Neural abstractive summarization models have led to promising results in summarizing relatively short documents.
MeanSum: A Neural Model for Unsupervised Multi-document Abstractive Summarization
Our proposed model consists of an auto-encoder where the mean of the representations of the input reviews decodes to a reasonable summary-review while not relying on any review-specific features.
Abstractive Summarization Using Attentive Neural Techniques
However, we show that these metrics are limited in their ability to effectively score abstractive summaries, and propose a new approach based on the intuition that an abstractive model requires an abstractive evaluation.
Pragmatically Informative Text Generation
We improve the informativeness of models for conditional text generation using techniques from computational pragmatics.
Sample Efficient Text Summarization Using a Single Pre-Trained Transformer
Language model (LM) pre-training has resulted in impressive performance and sample efficiency on a variety of language understanding tasks.