Abstractive Text Summarization

323 papers with code • 19 benchmarks • 49 datasets

Abstractive Text Summarization is the task of generating a short and concise summary that captures the salient ideas of the source text. The generated summaries potentially contain new phrases and sentences that may not appear in the source text.

Source: Generative Adversarial Network for Abstractive Text Summarization

Image credit: Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
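
The extractive/abstractive distinction in the definition above can be made concrete with a toy example. The sketch below is illustrative only (the text and the frequency-scoring baseline are made up, and the baseline is extractive, not abstractive): it copies one source sentence verbatim, while a hand-written abstractive summary contains words that never appear in the source.

```python
# Toy illustration (not a real summarizer) of extractive selection vs.
# abstractive generation. All text below is invented for illustration.
from collections import Counter

def extractive_baseline(text: str) -> str:
    """Pick the sentence sharing the most (frequent) words with the document."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    doc_freq = Counter(text.lower().split())
    def score(sentence: str) -> int:
        return sum(doc_freq[w] for w in sentence.lower().split())
    return max(sentences, key=score)

source = ("The council approved the new park on Monday. "
          "Residents had campaigned for green space for years. "
          "Construction is expected to begin next spring.")

extractive = extractive_baseline(source)   # a sentence copied verbatim
abstractive = "A long-sought city park was approved and breaks ground in spring."

# Words in the abstractive summary that never occur in the source --
# exactly the novelty that makes faithfulness harder to guarantee.
novel = (set(abstractive.lower().replace(".", "").split())
         - set(source.lower().replace(".", "").split()))
print(extractive)
print(sorted(novel))
```

In practice, abstractive systems are sequence-to-sequence models trained end-to-end; the point here is only that their outputs, unlike the extractive baseline's, need not be substrings of the input.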

Latest papers with no code

Improving Sequence-to-Sequence Models for Abstractive Text Summarization Using Meta Heuristic Approaches

no code yet • 24 Mar 2024

As human society transitions into the information age, attention spans are shrinking, the number of people who take time to read lengthy news articles is rapidly decreasing, and the need for succinct information is higher than ever before.

From Instructions to Constraints: Language Model Alignment with Automatic Constraint Verification

no code yet • 10 Mar 2024

We investigate common constraints in NLP tasks, categorize them into three classes based on the types of their arguments, and propose a unified framework, ACT (Aligning to ConsTraints), to automatically produce supervision signals for user alignment with constraints.
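
The core idea, a verifier that turns constraint satisfaction into a supervision signal, can be sketched as follows. The constraint classes, function names, and example below are hypothetical illustrations in the spirit of the abstract, not the paper's actual API:

```python
# Hypothetical sketch: a constraint verifier maps (output, constraints) -> {0, 1},
# and that binary signal can supervise alignment training.
from typing import Callable

Constraint = Callable[[str], bool]

def max_words(n: int) -> Constraint:
    """Length constraint: output must be at most n words."""
    return lambda text: len(text.split()) <= n

def must_mention(keyword: str) -> Constraint:
    """Content constraint: output must contain a required term."""
    return lambda text: keyword.lower() in text.lower()

def verify(output: str, constraints: list[Constraint]) -> int:
    """Supervision signal: 1 if every constraint holds, else 0."""
    return int(all(c(output) for c in constraints))

summary = "The model improves Turkish summarization benchmarks."
signal = verify(summary, [max_words(10), must_mention("Turkish")])
print(signal)  # prints 1: both constraints are satisfied
```

Because the check is automatic, no human labels are needed to score whether a model output respects the user's constraints.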

Few-Shot Cross-Lingual Transfer for Prompting Large Language Models in Low-Resource Languages

no code yet • 9 Mar 2024

We find that results are task- and language-dependent, but that the prompting method performs best on average across all tasks and languages.

A Second Look on BASS -- Boosting Abstractive Summarization with Unified Semantic Graphs -- A Replication Study

no code yet • 5 Mar 2024

We present a detailed replication study of the BASS framework, an abstractive summarization system based on the notion of Unified Semantic Graphs.

VBART: The Turkish LLM

no code yet • 2 Mar 2024

Our work shows that a pre-trained LLM dedicated to Turkish outperforms multilingual models up to 3x its size, improving on existing results and providing efficient models for training and inference.

EROS: Entity-Driven Controlled Policy Document Summarization

no code yet • 29 Feb 2024

In this paper, we propose to enhance the interpretability and readability of policy documents using controlled abstractive summarization: we require the generated summaries to include critical privacy-related entities (e.g., data and medium) and the organization's rationale (e.g., target and reason) for collecting those entities.

Layer-wise Regularized Dropout for Neural Language Models

no code yet • 26 Feb 2024

To solve the inconsistency between training and inference caused by the randomness of dropout, some studies use consistency training to regularize dropout at the output layer.
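
Output-layer consistency training under dropout can be sketched in a few lines. This is a minimal NumPy illustration with made-up shapes (it is not the paper's implementation): two stochastic forward passes through the same weights produce two predictive distributions, and their symmetric KL divergence becomes an extra regularization term added to the task loss.

```python
# Minimal sketch of dropout consistency regularization (R-Drop-style):
# two stochastic passes, penalize divergence between the two outputs.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, W, drop_p=0.1):
    """One stochastic pass: inverted dropout on features, then a softmax head."""
    mask = rng.random(x.shape) >= drop_p
    h = x * mask / (1.0 - drop_p)
    return softmax(h @ W)

def symmetric_kl(p, q, eps=1e-12):
    kl = lambda a, b: (a * (np.log(a + eps) - np.log(b + eps))).sum(axis=-1)
    return 0.5 * (kl(p, q) + kl(q, p)).mean()

x = rng.normal(size=(4, 8))   # batch of 4 feature vectors (illustrative)
W = rng.normal(size=(8, 5))   # 5-class output head (illustrative)

p1 = forward(x, W)            # two passes draw two different dropout masks
p2 = forward(x, W)
consistency_loss = symmetric_kl(p1, p2)  # added to the task loss during training
print(float(consistency_loss))
```

At inference time dropout is disabled, so training the two stochastic passes to agree narrows the train/inference gap the abstract describes.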

Entity-level Factual Adaptiveness of Fine-tuning based Abstractive Summarization Models

no code yet • 23 Feb 2024

Abstractive summarization models often generate factually inconsistent content, particularly when the parametric knowledge of the model conflicts with the knowledge in the input document.

Analysis of Multidomain Abstractive Summarization Using Salience Allocation

no code yet • 19 Feb 2024

This paper explores the realm of abstractive text summarization through the lens of the SEASON (Salience Allocation as Guidance for Abstractive SummarizatiON) technique, a model designed to enhance summarization by leveraging salience allocation techniques.

A Hybrid Strategy for Chat Transcript Summarization

no code yet • 2 Feb 2024

Text summarization is the process of condensing a piece of text into fewer sentences while still preserving its content.