Text Infilling
20 papers with code • 0 benchmarks • 1 dataset
Text Infilling is the task of predicting missing spans of text that are consistent with the preceding and subsequent text. Text Infilling is a generalization of the cloze task—cloze historically refers to infilling individual words.
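As a minimal sketch of the idea, the snippet below picks the candidate span that best joins a left and a right context. The bigram scores are hand-written toys for illustration, not a learned model; a real infilling system would score candidates with a trained language model.

```python
# Toy text infilling: choose the candidate span most consistent with
# BOTH the preceding and subsequent context. Bigram scores are
# hypothetical, hand-written values, not learned parameters.

def score_infill(left, candidate, right, bigram_scores):
    """Score a candidate by how well it joins the two contexts."""
    tokens = left + candidate + right
    return sum(bigram_scores.get((a, b), 0.0)
               for a, b in zip(tokens, tokens[1:]))

def infill(left, right, candidates, bigram_scores):
    """Pick the candidate span with the highest joint-context score."""
    return max(candidates,
               key=lambda c: score_infill(left, c, right, bigram_scores))

# Hypothetical scores favouring "sat on" between "the cat" and "the mat".
scores = {
    ("cat", "sat"): 1.0, ("sat", "on"): 1.0, ("on", "the"): 1.0,
    ("cat", "ran"): 0.5, ("ran", "to"): 0.5, ("to", "the"): 0.2,
}

best = infill(["the", "cat"], ["the", "mat"],
              [["sat", "on"], ["ran", "to"]], scores)
# "sat on" scores 3.0 against 1.2 for "ran to"
```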
Benchmarks
These leaderboards are used to track progress in Text Infilling
Most implemented papers
Language modeling via stochastic processes
Recent work in self-supervised learning suggests that models can learn good latent representations via contrastive learning, which can be effective for discriminative tasks.
CTRLEval: An Unsupervised Reference-Free Metric for Evaluating Controlled Text Generation
Existing reference-free metrics have obvious limitations for evaluating controlled text generation models.
Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models
In this work, we adapt prompt-based few-shot learning to ELECTRA and show that it outperforms masked language models in a wide range of tasks.
Reprogramming Pretrained Language Models for Antibody Sequence Infilling
Results on antibody design benchmarks show that our model, trained on a low-resource antibody sequence dataset, generates highly diverse CDR sequences, with up to a more than two-fold increase in diversity over the baselines, without losing structural integrity or naturalness.
MetaFill: Text Infilling for Meta-Path Generation on Heterogeneous Information Networks
A meta-path, a sequence of node types and edge types, is the core technique for embedding heterogeneous information networks (HINs).
Generative Prompt Tuning for Relation Classification
Current prompt tuning methods mostly convert the downstream tasks to masked language modeling problems by adding cloze-style phrases and mapping all labels to verbalizations with fixed length, which has proven effective for tasks with simple label spaces.
Model-tuning Via Prompts Makes NLP Models Adversarially Robust
Across 5 NLP datasets, 4 adversarial attacks, and 3 different models, MVP improves performance against adversarial substitutions by an average of 8% over standard methods and even outperforms adversarial training-based state-of-the-art defenses by 3.5%.
MAGVLT: Masked Generative Vision-and-Language Transformer
Notably, MAGVLT achieves competitive results on both zero-shot image-to-text and text-to-image generation tasks on MS-COCO with a single moderate-sized model (fewer than 500M parameters), even without the use of monomodal data and networks.
A Simple yet Effective Framework for Few-Shot Aspect-Based Sentiment Analysis
In this work, we argue that two kinds of gaps, i.e., the domain gap and the objective gap, hinder the transfer of knowledge from pre-trained language models (PLMs) to ABSA tasks.
Probabilistically-sound beam search with masked language models
Beam search with masked language models (MLMs) is challenging in part because joint probability distributions over sequences are not readily available, unlike for autoregressive models.
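The contrast the abstract draws can be made concrete: for an autoregressive model the joint probability of a sequence factorises by the chain rule, so beam search can extend hypotheses token by token. The sketch below runs beam search over a toy hand-written conditional distribution; MLMs lack this factorisation, which is what makes the problem hard. All probabilities here are illustrative assumptions, not a real model.

```python
import math

def next_token_probs(prefix):
    # Toy conditional distribution p(token | prefix), conditioned only on
    # the last token. Hand-written values for illustration.
    table = {
        (): {"a": 0.6, "b": 0.4},
        ("a",): {"a": 0.1, "b": 0.9},
        ("b",): {"a": 0.7, "b": 0.3},
    }
    return table.get(tuple(prefix[-1:]), {"a": 0.5, "b": 0.5})

def beam_search(length, beam_width=2):
    """Keep the beam_width highest log-probability partial sequences."""
    beams = [([], 0.0)]  # (sequence, cumulative log-probability)
    for _ in range(length):
        expanded = [(seq + [tok], lp + math.log(p))
                    for seq, lp in beams
                    for tok, p in next_token_probs(seq).items()]
        beams = sorted(expanded, key=lambda x: x[1], reverse=True)[:beam_width]
    return beams

best_seq, best_lp = beam_search(2)[0]
# best_seq is ["a", "b"]: p = 0.6 * 0.9, the highest 2-token joint probability
```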