Sentence Fusion
19 papers with code • 1 benchmark • 3 datasets
Sentence Fusion is the task of combining several independent sentences into a single coherent text. Sentence Fusion is important in many NLP applications, including retrieval-based dialogue, text summarization, and question answering.
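To make the task concrete, here is a minimal, hypothetical sketch that fuses two sentences with a discourse connective chosen from a hand-written table. Real fusion systems (e.g. models trained on DiscoFuse) learn both the connective and the restructuring; the function and table below are illustrative assumptions only.

```python
# Hypothetical connective table keyed by discourse relation.
CONNECTIVES = {
    "contrast": "however",
    "cause": "because",
    "addition": "and",
}

def fuse(first: str, second: str, relation: str) -> str:
    """Fuse two sentences with a connective chosen by discourse relation."""
    connective = CONNECTIVES[relation]
    head = first.rstrip(". ")                      # drop trailing period
    tail = second[0].lower() + second[1:].rstrip(". ")
    return f"{head}, {connective} {tail}."
```

For example, `fuse("He was tired.", "He kept working.", "contrast")` yields "He was tired, however he kept working."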
Source: DiscoFuse: A Large-Scale Dataset for Discourse-Based Sentence Fusion
Latest papers
Non-autoregressive Text Editing with Copy-aware Latent Alignments
In this work, we propose a novel non-autoregressive text editing method to circumvent the above issues, by modeling the edit process with latent CTC alignments.
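For readers unfamiliar with CTC, the alignment mechanism rests on a standard collapsing rule: merge consecutive repeated symbols, then drop blanks. The sketch below shows only that generic rule, not the paper's copy-aware editing model.

```python
BLANK = "<b>"  # CTC blank symbol

def ctc_collapse(alignment):
    """Map a frame-level CTC alignment to its output token sequence."""
    out = []
    prev = None
    for tok in alignment:
        if tok != prev and tok != BLANK:  # merge repeats, drop blanks
            out.append(tok)
        prev = tok
    return out
```

For instance, the alignment `["a", "a", "<b>", "a", "b", "b"]` collapses to `["a", "a", "b"]`: the blank separates the two "a" outputs, while the repeated frames merge.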
RedPenNet for Grammatical Error Correction: Outputs to Tokens, Attentions to Spans
The text editing tasks, including sentence fusion, sentence splitting and rephrasing, text simplification, and Grammatical Error Correction (GEC), share a common trait of dealing with highly similar input and output sequences.
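Because input and output sequences overlap so heavily in these tasks, many systems predict edit operations over the source rather than generating from scratch. As a minimal illustration (not the paper's method), Python's standard difflib can recover such operations for a GEC-style correction:

```python
import difflib

def edit_ops(source: str, target: str):
    """Return (operation, source_span, target_span) tuples over word tokens."""
    src, tgt = source.split(), target.split()
    matcher = difflib.SequenceMatcher(a=src, b=tgt)
    return [(op, " ".join(src[i1:i2]), " ".join(tgt[j1:j2]))
            for op, i1, i2, j1, j2 in matcher.get_opcodes()]
```

For example, `edit_ops("She go to school", "She goes to school")` reduces the whole rewrite to a single `("replace", "go", "goes")` operation between two `equal` spans, which is exactly the structure edit-based models exploit.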
Bridging Continuous and Discrete Spaces: Interpretable Sentence Representation Learning via Compositional Operations
It is unclear whether the compositional semantics of sentences can be directly reflected as compositional operations in the embedding space.
CoEdIT: Text Editing by Task-Specific Instruction Tuning
We present a large language model fine-tuned on a diverse collection of task-specific instructions for text editing (a total of 82K instructions).
Improving Iterative Text Revision by Learning Where to Edit from Other Revision Tasks
Leveraging datasets from other related text editing NLP tasks, combined with the specification of editable spans, leads our system to more accurately model the process of iterative text refinement, as evidenced by empirical results and human evaluations.
ASDOT: Any-Shot Data-to-Text Generation with Pretrained Language Models
In the data disambiguation stage, we employ the prompted GPT-3 model to understand possibly ambiguous triples from the input data and convert each into a short sentence with reduced ambiguity.
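A prompt for this disambiguation step might be built along the following lines; the wording and function name are assumptions for illustration, not the paper's exact prompt or API call.

```python
def triple_prompt(subject: str, relation: str, obj: str) -> str:
    """Build a prompt asking an LLM to verbalize a triple unambiguously.
    (Hypothetical template; the actual ASDOT prompt may differ.)"""
    return (
        "Rewrite the triple as one short, unambiguous sentence.\n"
        f"Triple: ({subject}, {relation}, {obj})\n"
        "Sentence:"
    )
```

The returned string would then be sent to the language model, whose completion serves as the short, disambiguated sentence for that triple.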
Summarization Programs: Interpretable Abstractive Summarization with Neural Modular Trees
We demonstrate that SP-Search effectively represents the generative process behind human summaries using modules that are typically faithful to their intended behavior.
Extending Multi-Text Sentence Fusion Resources via Pyramid Annotations
NLP models that compare or consolidate information across multiple documents often struggle when challenged with recognizing substantial information redundancies across the texts.
Dissecting Generation Modes for Abstractive Summarization Models via Ablation and Attribution
Despite the prominence of neural abstractive summarization models, we know little about how they actually form summaries and how to understand where their decisions come from.
Data-to-Text Generation with Iterative Text Editing
Our approach maximizes the completeness and semantic accuracy of the output text while leveraging the abilities of recent pre-trained models for text editing (LaserTagger) and language modeling (GPT-2) to improve the text fluency.
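LaserTagger-style editing assigns each source token a tag such as KEEP, DELETE, or a tag combined with a phrase to insert before the token. The realization step below is a simplified sketch of that idea, not LaserTagger's actual implementation.

```python
def realize(tokens, tags):
    """Apply per-token edit tags (KEEP, DELETE, TAG|phrase) to produce text.

    A "|phrase" suffix inserts the phrase before the token; the base tag
    then decides whether the token itself is kept. Simplified sketch.
    """
    out = []
    for token, tag in zip(tokens, tags):
        base, _, added = tag.partition("|")
        if added:                 # inserted phrase goes before the token
            out.append(added)
        if base == "KEEP":
            out.append(token)
    return " ".join(out)
```

For a fusion example, tokens `"Turing was born in 1912 . He died in 1954 ."` with tags `KEEP ×5, DELETE, DELETE|and, KEEP ×4` realize to "Turing was born in 1912 and died in 1954 ." with a single inserted connective.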