Cloze Test
28 papers with code • 2 benchmarks • 1 dataset
The cloze task refers to infilling individual blanked-out words in a passage, given the surrounding context.
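A minimal sketch of a cloze test, under stated assumptions: a sentence contains a single blank, and each candidate filler is scored in some way, with the highest-scoring candidate chosen. Here a toy unigram-count scorer built from a tiny corpus stands in for the pre-trained language model a real system would use; all names and the corpus are illustrative.

```python
from collections import Counter

# Toy corpus used to build unigram counts; a real cloze system would
# score candidates with a pre-trained language model instead.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat slept on the mat",
]

# Count how often each word appears in the corpus.
counts = Counter(word for line in corpus for word in line.split())

def fill_blank(template: str, candidates: list[str]) -> str:
    """Pick the candidate filler for the '___' blank in `template`.

    Scoring here is just the candidate's unigram count, a crude
    stand-in for a contextual language-model score.
    """
    assert "___" in template
    return max(candidates, key=lambda w: counts[w])

print(fill_blank("the ___ sat on the mat", ["cat", "zebra"]))  # → cat
```

The same interface generalizes directly: swapping the unigram scorer for a masked language model's probability of each candidate at the blank position recovers the standard BERT-style cloze setup.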
Most implemented papers
SiBert: Enhanced Chinese Pre-trained Language Model with Sentence Insertion
However, some studies show that customized self-supervised tasks for a particular type of downstream task can effectively help the pre-trained model to capture more corresponding knowledge and semantic information.
Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward
Sequence-to-sequence models for abstractive summarization have been studied extensively, yet the generated summaries commonly suffer from fabricated content, and are often found to be near-extractive.
On the Robustness of Language Encoders against Grammatical Errors
We conduct a thorough study to diagnose the behaviors of pre-trained language encoders (ELMo, BERT, and RoBERTa) when confronted with natural grammatical errors.
MC-BERT: Efficient Language Pre-Training via a Meta Controller
Pre-trained contextual representations (e.g., BERT) have become the foundation to achieve state-of-the-art results on many NLP tasks.
Explainable Inference on Sequential Data via Memory-Tracking
Our results show that we are able to explain the agent's decisions in (1) and to reconstruct the most relevant sentences used by the network to select the story ending in (2).
Cloze Test Helps: Effective Video Anomaly Detection via Learning to Complete Video Events
To build such a visual cloze test, a certain patch of STC is erased to yield an incomplete event (IE).
Reasoning about Goals, Steps, and Temporal Ordering with WikiHow
We propose a suite of reasoning tasks on two types of relations between procedural events: goal-step relations ("learn poses" is a step in the larger goal of "doing yoga") and step-step temporal relations ("buy a yoga mat" typically precedes "learn poses").
A BERT-based Dual Embedding Model for Chinese Idiom Prediction
Specifically, we first match the embedding of each candidate idiom with the hidden representation corresponding to the blank in the context.
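The matching step described above can be sketched as a similarity score between each candidate idiom's embedding and the hidden representation at the blank position. This is a generic illustration, not the paper's actual model: the embeddings and hidden state below are random stand-ins for what BERT and an idiom-embedding table would produce, and all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 8

# Hidden representation corresponding to the blank in the context
# (in practice produced by a pre-trained encoder; random here).
blank_hidden = rng.normal(size=hidden_dim)

# One embedding per candidate idiom (illustrative placeholders).
candidate_embeddings = {
    "idiom_a": rng.normal(size=hidden_dim),
    "idiom_b": rng.normal(size=hidden_dim),
    "idiom_c": rng.normal(size=hidden_dim),
}

# Score each candidate by dot product with the blank's hidden state,
# then select the best-matching idiom for the blank.
scores = {name: float(emb @ blank_hidden)
          for name, emb in candidate_embeddings.items()}
best = max(scores, key=scores.get)
print(best)
```

In a trained model the dot-product scores would typically be passed through a softmax over the candidate set and optimized with a cross-entropy loss against the gold idiom.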
Braid: Weaving Symbolic and Neural Knowledge into Coherent Logical Explanations
Traditional symbolic reasoning engines, while attractive for their precision and explicability, have a few major drawbacks: the use of brittle inference procedures that rely on exact matching (unification) of logical terms, an inability to deal with uncertainty, and the need for a precompiled rule-base of knowledge (the "knowledge acquisition" problem).
Probing for Bridging Inference in Transformer Language Models
We probe pre-trained transformer language models for bridging inference.