Abstractive Text Summarization
323 papers with code • 19 benchmarks • 49 datasets
Abstractive Text Summarization is the task of generating a short, concise summary that captures the salient ideas of the source text. The generated summaries may contain new phrases and sentences that do not appear in the source text.
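To make the distinction concrete, the sketch below implements a minimal *extractive* baseline in pure Python: it scores sentences by average word frequency and copies the top-scoring sentence verbatim. An abstractive system, by contrast, would *generate* new phrasing, typically with a sequence-to-sequence model such as those listed below. All names and the sample text here are illustrative, not from any of the listed papers.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Score each sentence by the average corpus frequency of its words
    and return the top-scoring sentences copied verbatim (extractive).
    An abstractive model would instead generate a paraphrase, possibly
    using words absent from the source."""
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = sorted(sentences, key=score, reverse=True)
    return " ".join(ranked[:n_sentences])

doc = ("Neural networks now dominate summarization research. "
       "Neural networks can copy sentences or generate new ones. "
       "Weather was pleasant yesterday.")
print(extractive_summary(doc))
```

Because the baseline can only select and reorder existing sentences, it can never produce the novel wording that defines the abstractive setting; that gap is what the neural models surveyed on this page address.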
Source: Generative Adversarial Network for Abstractive Text Summarization
Image credit: Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
Latest papers with no code
Improving Sequence-to-Sequence Models for Abstractive Text Summarization Using Meta Heuristic Approaches
As human society transitions into the information age, attention spans are shrinking: fewer people take the time to read lengthy news articles, and the need for succinct information is higher than ever before.
From Instructions to Constraints: Language Model Alignment with Automatic Constraint Verification
We investigate common constraints in NLP tasks, categorize them into three classes based on the types of their arguments, and propose a unified framework, ACT (Aligning to ConsTraints), to automatically produce supervision signals for user alignment with constraints.
Few-Shot Cross-Lingual Transfer for Prompting Large Language Models in Low-Resource Languages
We find that results are task- and language-dependent, but that the prompting method performs best on average across all tasks and languages.
A Second Look on BASS -- Boosting Abstractive Summarization with Unified Semantic Graphs -- A Replication Study
We present a detailed replication study of the BASS framework, an abstractive summarization system based on the notion of Unified Semantic Graphs.
VBART: The Turkish LLM
Our work shows that a pre-trained LLM for Turkish outperforms multilingual models up to 3x its size, improving existing results and providing efficient models for training and inference.
EROS: Entity-Driven Controlled Policy Document Summarization
In this paper, we propose to enhance the interpretability and readability of policy documents by using controlled abstractive summarization -- we enforce the generated summaries to include critical privacy-related entities (e.g., data and medium) and the organization's rationale (e.g., target and reason) in collecting those entities.
Layer-wise Regularized Dropout for Neural Language Models
To solve the inconsistency between training and inference caused by the randomness of dropout, some studies use consistency training to regularize dropout at the output layer.
Entity-level Factual Adaptiveness of Fine-tuning based Abstractive Summarization Models
Abstractive summarization models often generate factually inconsistent content particularly when the parametric knowledge of the model conflicts with the knowledge in the input document.
Analysis of Multidomain Abstractive Summarization Using Salience Allocation
This paper explores the realm of abstractive text summarization through the lens of the SEASON (Salience Allocation as Guidance for Abstractive SummarizatiON) technique, a model designed to enhance summarization by leveraging salience allocation techniques.
A Hybrid Strategy for Chat Transcript Summarization
Text summarization is the process of condensing a piece of text to fewer sentences, while still preserving its content.