Text Infilling

20 papers with code • 0 benchmarks • 1 dataset

Text Infilling is the task of predicting missing spans of text that are consistent with the preceding and subsequent text. It is a generalization of the cloze task; cloze historically refers to infilling individual words.

Source: Enabling Language Models to Fill in the Blanks
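
As a quick illustration of the task, seq2seq models pretrained with span corruption can infill marked blanks directly. The following is a minimal sketch using Hugging Face Transformers and T5's sentinel-token convention; the model choice and decoding settings are illustrative, not taken from any paper listed here.

```python
# Minimal text-infilling sketch: T5 marks missing spans with sentinel
# tokens (<extra_id_0>, <extra_id_1>, ...) and generates their fillers.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# <extra_id_0> marks the span to be infilled.
text = "She ate <extra_id_0> for breakfast."
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=10, num_beams=4)
# The output interleaves sentinels with the predicted fillers.
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```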

Most implemented papers

Enabling Language Models to Fill in the Blanks

chrisdonahue/ilm ACL 2020

We show that this approach, which we call infilling by language modeling, can enable LMs to infill entire sentences effectively on three different domains: short stories, scientific abstracts, and lyrics.
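
ILM casts infilling as ordinary left-to-right language modeling by training on examples that concatenate the masked text with its answers. A hedged sketch of that example format, with illustrative special tokens (see the chrisdonahue/ilm repository for the exact ones):

```python
# Build an ILM-style training example: replace each missing span with a
# blank token and append the answers after a separator. Token strings
# here are illustrative placeholders.
def make_ilm_example(text: str, spans: list) -> str:
    """Replace (start, end) character spans with [blank], append answers."""
    masked, answers, prev = [], [], 0
    for start, end in sorted(spans):
        masked.append(text[prev:start] + "[blank]")
        answers.append(text[start:end] + " [answer]")
        prev = end
    masked.append(text[prev:])
    return "".join(masked) + " [sep] " + " ".join(answers)

print(make_ilm_example("She ate cereal for breakfast.", [(8, 14)]))
# She ate [blank] for breakfast. [sep] cereal [answer]
```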

Nutribullets Hybrid: Multi-document Health Summarization

atulkum/pointer_summarizer 8 Apr 2021

We present a method for generating comparative summaries that highlights similarities and contradictions in input documents.

LOT: A Story-Centric Benchmark for Evaluating Chinese Long Text Understanding and Generation

thu-coai/LOT-LongLM 30 Aug 2021

Therefore, we propose a story-centric benchmark named LOT for evaluating Chinese long text modeling, which aggregates two understanding tasks and two generation tasks.

Text Infilling

VegB/Text_Infilling 1 Jan 2019

Recent years have seen remarkable progress of text generation in different contexts, such as the most common setting of generating text from scratch, and the emerging paradigm of retrieval-and-rewriting.

TIGS: An Inference Algorithm for Text Infilling with Gradient Search

dayihengliu/Text-Infilling-Gradient-Search ACL 2019

Text infilling is the task of filling in the missing parts of a sentence or paragraph, and it arises in many real-world natural language generation scenarios.
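
TIGS searches for the missing words at inference time. The sketch below captures the gradient-search idea under the assumption that blanks are relaxed to continuous embeddings, optimized against a language-model objective, and projected back to the nearest vocabulary embeddings; the objective here is a toy placeholder, not the paper's model.

```python
# Toy sketch of alternating gradient updates and nearest-neighbor
# projection for infilling blanks, in the spirit of gradient search.
import torch

vocab_size, dim, n_blanks = 1000, 32, 2
embedding = torch.nn.Embedding(vocab_size, dim)

# Continuous relaxation of the missing tokens.
blank_embs = torch.randn(n_blanks, dim, requires_grad=True)
optimizer = torch.optim.Adam([blank_embs], lr=0.1)

def lm_loss(embs):
    # Placeholder objective; a real system scores the full sentence with
    # the blanks' embeddings spliced in.
    return (embs ** 2).sum()

for _ in range(100):
    optimizer.zero_grad()
    lm_loss(blank_embs).backward()
    optimizer.step()
    with torch.no_grad():
        # Project each vector onto its nearest word embedding.
        dists = torch.cdist(blank_embs, embedding.weight)
        blank_embs.copy_(embedding.weight[dists.argmin(dim=1)])

print(dists.argmin(dim=1))  # indices of the infilled tokens
```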

Keep Calm and Switch On! Preserving Sentiment and Fluency in Semantic Text Exchange

styfeng/SMERTI IJCNLP 2019

In this paper, we present a novel method for measurably adjusting the semantics of text while preserving its sentiment and fluency, a task we call semantic text exchange.

Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning

qkaren/unsup_gen_for_cms_reasoning EMNLP 2020

Abductive and counterfactual reasoning, core abilities of everyday human cognition, require reasoning about what might have happened at time t, while conditioning on multiple contexts from the relative past and future.

Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting

michaelzhouwang/sequence_span_rewriting EMNLP 2021

In this paper, we generalize text infilling (e.g., masked language models) by proposing Sequence Span Rewriting (SSR) as a self-supervised sequence-to-sequence (seq2seq) pre-training objective.
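
Concretely, SSR trains the model to rewrite machine-generated, possibly imperfect, span infills back into the original text rather than filling raw masks. A hedged sketch of constructing one training pair, where imperfect_infill is a trivial stand-in for a real (weaker) infilling model:

```python
# Build a Sequence Span Rewriting pair: source = text with the span
# re-filled by an imperfect model, target = the original text.
def imperfect_infill(masked_text: str) -> str:
    # Stand-in: a real setup would call a smaller pretrained infiller.
    return masked_text.replace("<mask>", "a meal")

def make_ssr_pair(text: str, span: tuple) -> tuple:
    start, end = span
    masked = text[:start] + "<mask>" + text[end:]
    return imperfect_infill(masked), text  # (source, target)

src, tgt = make_ssr_pair("She ate cereal for breakfast.", (8, 14))
print(src)  # She ate a meal for breakfast.
print(tgt)  # She ate cereal for breakfast.
```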

Show Me How To Revise: Improving Lexically Constrained Sentence Generation with XLNet

nlpcode/mcmcxlnet 13 Sep 2021

To overcome this challenge, we used a classifier to instruct the MCMC-based models where and how to refine the candidate sentences.
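
A hedged, toy sketch of the classifier-guided refinement loop: a classifier proposes a position and an edit type (replace/insert/delete), and the edit is kept if it improves a fluency score. Both the classifier and the scorer below are random or rule-based stand-ins, and the accept rule is a hill-climbing simplification of the paper's MCMC sampling.

```python
import random

ACTIONS = ["replace", "insert", "delete"]

def classifier(tokens):
    # Stand-in: the paper trains a classifier to score (position, action).
    return random.randrange(len(tokens)), random.choice(ACTIONS)

def fluency(tokens):
    # Stand-in for a language-model score; toy preference for ~6 tokens.
    return -abs(len(tokens) - 6)

def refine(tokens, steps=50):
    for _ in range(steps):
        pos, action = classifier(tokens)
        cand = list(tokens)
        if action == "replace":
            cand[pos] = "<mask>"      # a real system fills this with XLNet
        elif action == "insert":
            cand.insert(pos, "<mask>")
        elif len(cand) > 1:           # delete
            del cand[pos]
        if fluency(cand) >= fluency(tokens):  # simplified accept rule
            tokens = cand
    return tokens

print(refine("the cat sat on the mat".split()))
```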

Conformal prediction for text infilling and part-of-speech prediction

jackferrellncsu/drums-nlp-codesnapshot 4 Nov 2021

In our paper, we propose inductive conformal prediction (ICP) algorithms for the tasks of text infilling and part-of-speech (POS) prediction for natural language data.
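
For intuition, inductive conformal prediction calibrates a nonconformity score on held-out blanks and then returns every candidate token whose score falls under the calibration quantile, giving approximately (1 - alpha) marginal coverage. A minimal sketch with made-up probabilities; the score used here (1 minus the model's probability of the true token) is one common choice, not necessarily the paper's:

```python
import numpy as np

def prediction_set(probs, cal_scores, alpha):
    # Conformal quantile of the calibration scores.
    n = len(cal_scores)
    q = np.quantile(cal_scores, min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0))
    # Keep every token whose nonconformity (1 - prob) is within the quantile.
    return np.where(1.0 - probs <= q)[0]

# Calibration: 1 - P(true token) for each held-out blank (made-up values).
cal_scores = np.array([0.10, 0.30, 0.25, 0.40, 0.15, 0.35, 0.20, 0.45])
# Model probabilities over a 5-token candidate vocabulary for a new blank.
probs = np.array([0.60, 0.20, 0.12, 0.05, 0.03])
print(prediction_set(probs, cal_scores, alpha=0.2))  # -> [0]
```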