Abstractive Text Summarization

328 papers with code • 19 benchmarks • 48 datasets

Abstractive Text Summarization is the task of generating a short and concise summary that captures the salient ideas of the source text. The generated summaries may contain new phrases and sentences that do not appear in the source text.

Source: Generative Adversarial Network for Abstractive Text Summarization

Libraries

Use these libraries to find Abstractive Text Summarization models and implementations
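
For example, the Hugging Face Transformers library ships a summarization pipeline backed by pretrained sequence-to-sequence checkpoints. A minimal sketch, where the BART checkpoint and generation lengths are one reasonable configuration rather than a recommendation:

    # Minimal abstractive summarization with Hugging Face Transformers.
    # Requires: pip install transformers torch
    from transformers import pipeline

    # BART fine-tuned on CNN/DailyMail, a standard summarization checkpoint.
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

    article = (
        "Abstractive summarization systems generate summaries that may "
        "contain phrases and sentences not present in the source text, in "
        "contrast to extractive systems, which copy spans verbatim."
    )

    # max_length and min_length bound the length of the generated summary.
    result = summarizer(article, max_length=60, min_length=10, do_sample=False)
    print(result[0]["summary_text"])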

Latest papers with no code

Exploiting Representation Bias for Data Distillation in Abstractive Text Summarization

no code yet • 10 Dec 2023

We employ clustering techniques to learn the diversity of a model's sample space and how data points are mapped from the embedding space to the encoder space and vice versa.
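
As a rough illustration of the clustering idea (an assumed setup, not the paper's exact procedure), one can embed the training documents, cluster the embeddings, and keep the document nearest each centroid as a diverse distilled subset. The encoder checkpoint and selection rule below are illustrative:

    # Illustrative sketch: distill a training set by clustering document
    # embeddings and keeping one representative per cluster.
    # (Assumed setup; not the paper's exact method.)
    # Requires: pip install sentence-transformers scikit-learn numpy
    import numpy as np
    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import KMeans

    def distill_by_clustering(documents, k=10):
        # Embed each document into a shared vector space.
        encoder = SentenceTransformer("all-MiniLM-L6-v2")
        embeddings = encoder.encode(documents)

        # Cluster the embeddings to expose the diversity of the sample space.
        kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)

        # Keep the document nearest each centroid as its representative.
        selected = []
        for c in range(k):
            members = np.where(kmeans.labels_ == c)[0]
            dists = np.linalg.norm(
                embeddings[members] - kmeans.cluster_centers_[c], axis=1)
            selected.append(int(members[np.argmin(dists)]))
        return sorted(selected)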

Questioning Biases in Case Judgment Summaries: Legal Datasets or Large Language Models?

no code yet • 1 Dec 2023

The evolution of legal datasets and the advent of large language models (LLMs) have significantly transformed the legal field, particularly in the generation of case judgment summaries.

Controllable Topic-Focused Abstractive Summarization

no code yet • 12 Nov 2023

We show that our model sets a new state of the art on the NEWTS dataset in terms of topic-focused abstractive summarization as well as a topic-prevalence score.

Legal-HNet: Mixing Legal Long-Context Tokens with Hartley Transform

no code yet • 9 Nov 2023

Since its introduction, the Transformer architecture has seen wide adoption in NLP applications, but it also has limitations.
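
The title points to token mixing with the Hartley transform, a real-valued relative of the Fourier transform used in the spirit of FNet's mixing layers. A hedged sketch of such a layer, using the identity H(x) = Re(FFT(x)) - Im(FFT(x)); the placement inside an encoder block is an assumption, not Legal-HNet's exact design:

    # Sketch of a Hartley-transform token-mixing layer (FNet-style),
    # using H(x) = Re(FFT(x)) - Im(FFT(x)); not Legal-HNet's exact design.
    import torch
    import torch.nn as nn

    class HartleyMixing(nn.Module):
        """Parameter-free token mixing over the sequence dimension."""
        def forward(self, x):  # x: (batch, seq_len, hidden)
            f = torch.fft.fft(x, dim=1)   # mix along the token axis
            return f.real - f.imag        # discrete Hartley transform

    # Usage: replace self-attention in an encoder block with HartleyMixing,
    # keeping the usual residual connection and feed-forward sublayer.
    x = torch.randn(2, 128, 64)
    print(HartleyMixing()(x).shape)  # torch.Size([2, 128, 64])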

Correction with Backtracking Reduces Hallucination in Summarization

no code yet • 24 Oct 2023

The results show that CoBa is effective and efficient in reducing hallucination, and offers great adaptability and flexibility.
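
Only the headline result is quoted here, but the title describes the mechanism: detect a suspect token during decoding, back up, and regenerate. A generic sketch of that idea, assuming a Hugging Face-style causal LM (so that model(generated).logits exists) and using low token confidence as a stand-in for CoBa's actual hallucination test:

    # Illustrative backtracking decoder: re-decode a position when the
    # chosen token's confidence is low. Assumed heuristic; CoBa's actual
    # hallucination test is not reproduced here.
    import torch

    def decode_with_backtracking(model, input_ids, max_new_tokens=50,
                                 threshold=0.1):
        generated = input_ids   # (1, seq_len), a Hugging Face-style input
        banned = {}             # step -> token ids rejected at that step
        step = 0
        while step < max_new_tokens:
            logits = model(generated).logits[:, -1, :]  # next-token logits
            for tok in banned.get(step, set()):
                logits[0, tok] = float("-inf")  # exclude rejected tokens
            probs = torch.softmax(logits, dim=-1)
            token = int(torch.argmax(probs, dim=-1))
            if probs[0, token] < threshold and len(banned.get(step, set())) < 5:
                # Low confidence: reject this token and re-decode the position.
                banned.setdefault(step, set()).add(token)
                continue
            generated = torch.cat([generated, torch.tensor([[token]])], dim=-1)
            step += 1
        return generated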

Clinfo.ai: An Open-Source Retrieval-Augmented Large Language Model System for Answering Medical Questions using Scientific Literature

no code yet • 24 Oct 2023

The rapid expansion of the published medical literature makes it challenging for clinicians and researchers to keep up with and summarize recent, relevant findings in a timely manner.
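
The system is retrieval-augmented: fetch relevant literature, then condition the LLM's answer on it. A minimal sketch of that generic pattern, where search_pubmed is a hypothetical retriever and the OpenAI chat API is one possible backend, not necessarily Clinfo.ai's:

    # Minimal retrieval-augmented QA sketch (generic RAG pattern, not
    # Clinfo.ai's actual pipeline). `search_pubmed` is a hypothetical
    # retriever; the OpenAI chat API is one possible backend.
    # Requires: pip install openai
    from openai import OpenAI

    def search_pubmed(question, k=3):
        # Placeholder: query a literature index (e.g., the PubMed APIs)
        # and return the k most relevant abstracts as strings.
        raise NotImplementedError

    def answer_with_citations(question):
        abstracts = search_pubmed(question)
        context = "\n\n".join(f"[{i+1}] {a}" for i, a in enumerate(abstracts))
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Answer using only the provided abstracts; "
                            "cite them as [n]."},
                {"role": "user",
                 "content": f"{context}\n\nQuestion: {question}"},
            ],
        )
        return response.choices[0].message.content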

Enhancing Abstractiveness of Summarization Models through Calibrated Distillation

no code yet • 20 Oct 2023

Our experiments show that DisCal outperforms prior methods in abstractive summarization distillation, producing highly abstractive and informative summaries.

On Context Utilization in Summarization with Large Language Models

no code yet • 16 Oct 2023

However, in question answering, language models exhibit uneven utilization of their input context.

Metric Ensembles For Hallucination Detection

no code yet • 16 Oct 2023

Due to this need, a wide array of metrics estimating consistency with the text being summarized have been proposed.
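
The title suggests combining several such metrics. A minimal sketch of a score ensemble, where the component scorers are hypothetical stand-ins (e.g., NLI-based entailment or QA-based consistency checks), each assumed to return a value in [0, 1]:

    # Illustrative ensemble of consistency metrics for hallucination
    # detection; the component scorers are hypothetical stand-ins.
    def ensemble_consistency(source, summary, metrics):
        # Each metric maps (source, summary) -> float in [0, 1]; the
        # ensemble score is their plain average.
        scores = [metric(source, summary) for metric in metrics]
        return sum(scores) / len(scores)

    # Usage sketch with hypothetical scorers:
    # score = ensemble_consistency(doc, summ, [nli_score, qa_score])
    # flagged = score < 0.5  # below-threshold summaries get flagged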

Calibrating Likelihoods towards Consistency in Summarization Models

no code yet • 12 Oct 2023

Despite recent advances in abstractive text summarization, current summarization models still suffer from generating factually inconsistent summaries, reducing their utility for real-world applications.
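
One generic way to calibrate likelihoods toward consistency (a contrastive sequence-level ranking loss in the spirit of calibration methods such as BRIO, not necessarily this paper's objective) is to push the length-normalized log-probability of a consistent summary above that of an inconsistent one by a margin:

    # Generic margin ranking loss over length-normalized sequence
    # log-probabilities (illustrative; not necessarily this paper's objective).
    import torch
    import torch.nn.functional as F

    def consistency_calibration_loss(logp_consistent, logp_inconsistent,
                                     margin=1.0):
        # Hinge: penalize whenever the consistent summary is not scored at
        # least `margin` higher than the inconsistent one.
        return F.relu(margin - (logp_consistent - logp_inconsistent)).mean()

    # Usage sketch: logp_* come from scoring candidate summaries with the
    # summarizer and dividing the sequence log-prob by summary length.
    loss = consistency_calibration_loss(
        torch.tensor([-0.8, -1.1]), torch.tensor([-1.0, -0.9]))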