Abstractive Text Summarization
328 papers with code • 19 benchmarks • 48 datasets
Abstractive Text Summarization is the task of generating a concise summary that captures the salient ideas of the source text. Unlike extractive methods, which copy spans verbatim, the generated summaries may contain new phrases and sentences that do not appear in the source text.
Source: Generative Adversarial Network for Abstractive Text Summarization
Image credit: Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
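The defining property above, summaries that introduce phrases absent from the source, is commonly quantified as the fraction of novel n-grams. A minimal sketch in plain Python (function names and the toy sentences are illustrative, not taken from any paper listed below):

```python
def ngrams(tokens, n):
    """Return the set of n-grams in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_ratio(source, summary, n=2):
    """Fraction of summary n-grams that never appear in the source.

    0.0 means fully extractive at the n-gram level; values near 1.0
    indicate a highly abstractive summary.
    """
    src = ngrams(source.lower().split(), n)
    summ = ngrams(summary.lower().split(), n)
    if not summ:
        return 0.0
    return len(summ - src) / len(summ)

source = "the quick brown fox jumps over the lazy dog near the river"
extractive = "the quick brown fox jumps"          # copied span
abstractive = "a fast fox leaps over a sleeping dog"  # paraphrase

print(novel_ngram_ratio(source, extractive))   # 0.0: every bigram is copied
print(novel_ngram_ratio(source, abstractive))  # near 1.0: mostly novel bigrams
```

Higher n makes the measure stricter: a summary can reuse every individual word of the source and still score as abstractive at the bigram or trigram level.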
Libraries
Use these libraries to find Abstractive Text Summarization models and implementations
Datasets
Subtasks
Latest papers with no code
Exploiting Representation Bias for Data Distillation in Abstractive Text Summarization
We employ clustering techniques to learn the diversity of a model's sample space and how data points are mapped from the embedding space to the encoder space and vice versa.
Questioning Biases in Case Judgment Summaries: Legal Datasets or Large Language Models?
The evolution of legal datasets and the advent of large language models (LLMs) have significantly transformed the legal field, particularly in the generation of case judgment summaries.
Controllable Topic-Focused Abstractive Summarization
We show that our model sets a new state of the art on the NEWTS dataset in terms of topic-focused abstractive summarization as well as a topic-prevalence score.
Legal-HNet: Mixing Legal Long-Context Tokens with Hartley Transform
Since its introduction, the transformers architecture has seen great adoption in NLP applications, but it also has limitations.
Correction with Backtracking Reduces Hallucination in Summarization
The results show that CoBa is effective and efficient in reducing hallucination, and offers great adaptability and flexibility.
Clinfo.ai: An Open-Source Retrieval-Augmented Large Language Model System for Answering Medical Questions using Scientific Literature
The rapidly expanding body of published medical literature makes it challenging for clinicians and researchers to keep up with and summarize recent, relevant findings in a timely manner.
Enhancing Abstractiveness of Summarization Models through Calibrated Distillation
Our experiments show that DisCal outperforms prior methods in abstractive summarization distillation, producing highly abstractive and informative summaries.
On Context Utilization in Summarization with Large Language Models
However, in question answering, language models exhibit uneven utilization of their input context.
Metric Ensembles For Hallucination Detection
Due to this need, a wide array of metrics estimating consistency with the text being summarized have been proposed.
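The idea of ensembling consistency metrics can be sketched as averaging several per-summary scores into one hallucination signal. The two component metrics below are toy lexical stand-ins for the learned metrics real systems use; all names here are illustrative:

```python
def unigram_support(source, summary):
    """Fraction of summary tokens that also occur in the source."""
    src = set(source.lower().split())
    toks = summary.lower().split()
    return sum(t in src for t in toks) / len(toks) if toks else 0.0

def bigram_support(source, summary):
    """Fraction of summary bigrams that also occur in the source."""
    def bigrams(tokens):
        return {tuple(tokens[i:i + 2]) for i in range(len(tokens) - 1)}
    src = bigrams(source.lower().split())
    summ = bigrams(summary.lower().split())
    return len(summ & src) / len(summ) if summ else 0.0

def ensemble_score(source, summary, metrics=(unigram_support, bigram_support)):
    """Average several [0, 1] consistency estimates.

    Low ensemble scores flag summaries whose content is poorly
    supported by the source, i.e. likely hallucinations.
    """
    return sum(m(source, summary) for m in metrics) / len(metrics)

source = "the cat sat on the mat"
print(ensemble_score(source, "the cat sat on the mat"))    # 1.0: fully supported
print(ensemble_score(source, "the dog flew to the moon"))  # low: unsupported content
```

A simple average assumes all metrics are on the same scale and equally reliable; in practice the component scores would first be calibrated or the ensemble weights tuned on labeled consistency data.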
Calibrating Likelihoods towards Consistency in Summarization Models
Despite recent advances in abstractive text summarization, current summarization models still suffer from generating factually inconsistent summaries, reducing their utility for real-world applications.