Abstractive Text Summarization
325 papers with code • 19 benchmarks • 48 datasets
Abstractive Text Summarization is the task of generating a short and concise summary that captures the salient ideas of the source text. The generated summaries potentially contain new phrases and sentences that may not appear in the source text.
Source: Generative Adversarial Network for Abstractive Text Summarization
Image credit: Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
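The defining property in the task description above, that a summary may contain words absent from the source, follows from how seq2seq decoders work: each output token is chosen from the full vocabulary, not copied from the input. A toy greedy-decoding sketch of this, where the scoring function is a hypothetical stand-in for a trained model's conditional log-probabilities:

```python
def greedy_decode(score_next, vocab, max_len=10, eos="<eos>"):
    """Greedily generate a token sequence from a next-token scorer.

    score_next(prefix, token) -> float stands in for a trained
    seq2seq model's conditional score of `token` given `prefix`.
    """
    out = []
    for _ in range(max_len):
        best = max(vocab, key=lambda tok: score_next(out, tok))
        if best == eos:
            break
        out.append(best)
    return out

# Toy scorer that prefers a fixed target sequence, token by token.
target = ["cats", "sleep", "a", "lot", "<eos>"]

def toy_scorer(prefix, token):
    i = len(prefix)
    return 1.0 if i < len(target) and target[i] == token else 0.0

vocab = ["cats", "sleep", "a", "lot", "often", "<eos>"]
print(greedy_decode(toy_scorer, vocab))  # ['cats', 'sleep', 'a', 'lot']
```

Real systems replace greedy selection with beam search and the toy scorer with an encoder-decoder network, but the generation loop has the same shape.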
Libraries
Use these libraries to find Abstractive Text Summarization models and implementations.
Latest papers with no code
VBART: The Turkish LLM
Our work shows that a pre-trained LLM dedicated to Turkish outperforms multilingual models up to 3x its size, improving on existing results and providing efficient models for training and inference.
EROS: Entity-Driven Controlled Policy Document Summarization
In this paper, we propose to enhance the interpretability and readability of policy documents by using controlled abstractive summarization -- we enforce the generated summaries to include critical privacy-related entities (e.g., data and medium) and the organization's rationale (e.g., target and reason) for collecting those entities.
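The entity-controlled setup described above implies a simple acceptance test: a candidate summary is kept only if it mentions every required entity. A minimal sketch of such a coverage check (the function name and example entities are illustrative, not taken from the paper):

```python
def covers_entities(summary: str, required: list[str]) -> bool:
    """Return True if every required entity string appears in the summary."""
    text = summary.lower()
    return all(ent.lower() in text for ent in required)

required = ["email address", "third parties"]
summary = "We collect your email address and may share it with third parties."
print(covers_entities(summary, required))        # True
print(covers_entities("We collect data.", required))  # False
```

In practice such constraints are enforced during decoding (e.g., with constrained beam search) rather than as a post-hoc filter, but the check captures the control objective.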
Layer-wise Regularized Dropout for Neural Language Models
To solve the inconsistency between training and inference caused by the randomness of dropout, some studies use consistency training to regularize dropout at the output layer.
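The consistency-training idea mentioned above (as in R-Drop-style methods) penalizes divergence between two stochastic forward passes of the same input, each with independent dropout masks. A framework-free sketch of the symmetric-KL regularizer, assuming the two passes' output logits are already available:

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def sym_kl(logits_a, logits_b):
    """Symmetric KL divergence between two dropout passes' distributions."""
    p, q = softmax(logits_a), softmax(logits_b)
    kl_pq = np.sum(p * np.log(p / q))
    kl_qp = np.sum(q * np.log(q / p))
    return 0.5 * (kl_pq + kl_qp)

# Identical passes incur zero penalty; divergent passes are penalized.
print(sym_kl([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # 0.0
```

This term is added to the usual cross-entropy loss with a weighting coefficient, pushing the two dropout sub-models toward consistent outputs.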
Entity-level Factual Adaptiveness of Fine-tuning based Abstractive Summarization Models
Abstractive summarization models often generate factually inconsistent content particularly when the parametric knowledge of the model conflicts with the knowledge in the input document.
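A common first-pass check for the entity-level inconsistency described above is to flag summary entities that never appear in the source document. A crude sketch using capitalized words as an entity proxy (a real system would use a NER model):

```python
import re

def entities(text: str) -> set[str]:
    """Crude entity proxy: capitalized words (a real system would use NER)."""
    return set(re.findall(r"\b[A-Z][a-z]+\b", text))

def unsupported_entities(source: str, summary: str) -> set[str]:
    """Summary entities with no mention in the source: likely hallucinations."""
    return entities(summary) - entities(source)

source = "Alice met Bob in Paris to discuss the merger."
summary = "Alice met Bob in London."
print(unsupported_entities(source, summary))  # {'London'}
```

Flagged entities like 'London' here are exactly the kind of parametric-knowledge intrusions the paper targets.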
Analysis of Multidomain Abstractive Summarization Using Salience Allocation
This paper explores the realm of abstractive text summarization through the lens of the SEASON (Salience Allocation as Guidance for Abstractive SummarizatiON) technique, a model designed to enhance summarization by leveraging salience allocation techniques.
A Hybrid Strategy for Chat Transcript Summarization
Text summarization is the process of condensing a piece of text into fewer sentences while preserving its essential content.
GUMsley: Evaluating Entity Salience in Summarization for 12 English Genres
As NLP models become increasingly capable of understanding documents in terms of coherent entities rather than strings, obtaining the most salient entities for each document is not only an important end task in itself but also vital for Information Retrieval (IR) and other downstream applications such as controllable summarization.
Evaluating GPT-3.5's Awareness and Summarization Abilities for European Constitutional Texts with Shared Topics
Constitutions are foundational legal documents that underpin governmental and societal structures.
Cross-Domain Robustness of Transformer-based Keyphrase Generation
In our experiments, abstractive text summarization models fine-tuned for keyphrase generation achieve strong results on the target text corpus.
Exploiting Representation Bias for Data Distillation in Abstractive Text Summarization
We employ clustering techniques to learn the diversity of a model's sample space and how data points are mapped from the embedding space to the encoder space and vice versa.