Abstractive Text Summarization
327 papers with code • 19 benchmarks • 48 datasets
Abstractive Text Summarization is the task of generating a short and concise summary that captures the salient ideas of the source text. The generated summaries may contain new phrases and sentences that do not appear in the source text.
Source: Generative Adversarial Network for Abstractive Text Summarization
Image credit: Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
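For orientation, here is a minimal sketch of the task end-to-end using the Hugging Face transformers library; the BART checkpoint and the example article are illustrative choices, not part of any particular paper on this page.

```python
# Minimal abstractive summarization sketch with the Hugging Face
# "transformers" library; facebook/bart-large-cnn is one common
# summarization checkpoint among many.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and the tallest structure in Paris. During its construction, "
    "the Eiffel Tower surpassed the Washington Monument to become the "
    "tallest man-made structure in the world."
)

# max_length / min_length bound the generated summary length in tokens.
summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```

Because the model generates the summary token by token rather than extracting sentences, the output may contain phrasing absent from the source, which is exactly what distinguishes abstractive from extractive summarization.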
Libraries
Use these libraries to find Abstractive Text Summarization models and implementations
Datasets
Subtasks
Latest papers
FREDSum: A Dialogue Summarization Corpus for French Political Debates
In this paper, we present a dataset of French political debates for the purpose of enhancing resources for multi-lingual dialogue summarization.
AMRFact: Enhancing Summarization Factuality Evaluation with AMR-Driven Negative Samples Generation
Prior work on evaluating the factual consistency of summarization often takes an entailment-based approach: first generate perturbed (factually inconsistent) summaries, then train a classifier on the generated data to detect factual inconsistencies at test time.
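The entailment-based scoring such approaches rely on can be illustrated with an off-the-shelf NLI model; a minimal sketch, assuming the Hugging Face transformers library and a standard MNLI checkpoint (the source/summary pair is invented for the example).

```python
# Hedged sketch of entailment-based factual consistency checking: an NLI
# model scores whether the source (premise) entails the summary (hypothesis).
# roberta-large-mnli is one standard MNLI checkpoint; others would also work.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

source = "The company reported a 5% rise in quarterly revenue."
summary = "Quarterly revenue fell by 5%."  # a factually inconsistent summary

# The text-classification pipeline accepts a dict with "text" / "text_pair"
# keys for sentence-pair inputs such as NLI.
result = nli({"text": source, "text_pair": summary})
print(result)  # expected top label: CONTRADICTION
```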
Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization
Despite the remarkable performance of generative large language models (LLMs) on abstractive summarization, they face two significant challenges: their considerable size and tendency to hallucinate.
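The pruning side of this trade-off can be made concrete with PyTorch's built-in utilities; a minimal sketch, where the 30% sparsity level is an arbitrary choice for illustration rather than a setting from the paper.

```python
# Magnitude pruning sketch with torch.nn.utils.prune, illustrating the
# kind of model-size reduction studied in pruned-LLM work.
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(1024, 1024)

# Zero out the 30% of weights with the smallest L1 magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.3)

# Make the pruning permanent (removes the reparameterization hooks).
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.1%}")
```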
Fair Abstractive Summarization of Diverse Perspectives
However, current work on summarization metrics and Large Language Model (LLM) evaluation has not explored fair abstractive summarization.
GreekT5: A Series of Greek Sequence-to-Sequence Models for News Summarization
The proposed models were thoroughly evaluated on the same dataset against GreekBART, which is the state-of-the-art model in Greek abstractive news summarization.
Fidelity-Enriched Contrastive Search: Reconciling the Faithfulness-Diversity Trade-Off in Text Generation
In this paper, we address the hallucination problem commonly found in natural language generation tasks.
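The paper builds on contrastive search decoding, which Hugging Face transformers exposes through the penalty_alpha and top_k generation arguments; the sketch below shows plain contrastive search, not the paper's fidelity-enriched variant, and GPT-2 stands in as a small example model.

```python
# Plain contrastive search decoding in Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The hallucination problem in text generation", return_tensors="pt")

# penalty_alpha trades model confidence against a degeneration penalty
# computed over the top_k candidate tokens at each step.
out = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```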
PartialFormer: Modeling Part Instead of Whole
The design choices in Transformer feed-forward neural networks have resulted in significant computational and parameter overhead.
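To see the overhead being targeted, consider the standard Transformer feed-forward block; a minimal sketch in PyTorch, using the common base configuration (d_ff = 4 × d_model) rather than any setting specific to PartialFormer.

```python
# Standard Transformer position-wise feed-forward network; the 4x hidden
# expansion is where most of each layer's parameters live.
import torch.nn as nn

d_model, d_ff = 512, 2048  # typical base config: d_ff = 4 * d_model

ffn = nn.Sequential(
    nn.Linear(d_model, d_ff),
    nn.ReLU(),
    nn.Linear(d_ff, d_model),
)

params = sum(p.numel() for p in ffn.parameters())
print(f"FFN parameters: {params:,}")  # ~2.1M per layer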
KCTS: Knowledge-Constrained Tree Search Decoding with Token-Level Hallucination Detection
Large Language Models (LLMs) have demonstrated remarkable human-level natural language generation capabilities.
Improving Summarization with Human Edits
Existing work uses human feedback to train large language models (LLMs) for general-domain abstractive summarization and has obtained summary quality exceeding traditional likelihood training.
Towards Green AI in Fine-tuning Large Language Models via Adaptive Backpropagation
With the rapid growth of LLM-enabled AI applications and the democratization of open-sourced LLMs, fine-tuning has become accessible to non-experts. However, intensive LLM fine-tuning performed worldwide could result in significant energy consumption and a large carbon footprint, with a correspondingly large environmental impact.
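The general idea behind adaptive backpropagation can be sketched as freezing most of the model so gradients (and their compute and energy cost) are only produced for a chosen subset of layers; the layer-selection rule below is a placeholder, not the paper's actual criterion.

```python
# Hedged sketch: restrict backpropagation to a subset of layers by
# freezing everything else. GPT-2 is used only as a small example model.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

for name, param in model.named_parameters():
    # Placeholder rule: only update the last two transformer blocks.
    param.requires_grad = name.startswith(("transformer.h.10", "transformer.h.11"))

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable:,} / {total:,}")
```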