Cloze Test

28 papers with code • 2 benchmarks • 1 dataset

The cloze task asks a model to fill in individual words that have been blanked out of a passage, using the surrounding context.
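A minimal sketch of a multiple-choice cloze item: given a sentence with a blank and a set of candidate fillers, score each candidate by how often it occurs in the blank's local context within a toy corpus. The function name, the corpus, and the scoring heuristic below are illustrative assumptions, not taken from any benchmark; real systems use pretrained language models instead of raw counts.

```python
from collections import Counter

def score_candidates(left, right, candidates, corpus):
    """Count corpus occurrences of each candidate adjacent to the
    `left` (preceding) or `right` (following) context word."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i, tok in enumerate(tokens):
            if tok in candidates:
                prev = tokens[i - 1] if i > 0 else None
                nxt = tokens[i + 1] if i + 1 < len(tokens) else None
                if prev == left or nxt == right:
                    counts[tok] += 1
    return counts

# Toy corpus standing in for pre-training data.
corpus = [
    "she drank a cup of coffee this morning",
    "he poured a cup of tea",
    "a cup of coffee keeps me awake",
]

# Cloze item: "a cup of ___ this morning"
scores = score_candidates("of", "this", {"coffee", "tea"}, corpus)
best = scores.most_common(1)[0][0]
print(best)  # frequency-based guess for the blank
```

Neural approaches replace the count-based score with the probability a language model assigns to each candidate at the blank position, but the task framing is the same.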

Most implemented papers

SiBert: Enhanced Chinese Pre-trained Language Model with Sentence Insertion

ewrfcas/SiBert LREC 2020

However, some studies show that customized self-supervised tasks for a particular type of downstream task can effectively help the pre-trained model to capture more corresponding knowledge and semantic information.

Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward

luyang-huang96/GraphAugmentedSum ACL 2020

Sequence-to-sequence models for abstractive summarization have been studied extensively, yet the generated summaries commonly suffer from fabricated content, and are often found to be near-extractive.

On the Robustness of Language Encoders against Grammatical Errors

uclanlp/ProbeGrammarRobustness ACL 2020

We conduct a thorough study to diagnose the behaviors of pre-trained language encoders (ELMo, BERT, and RoBERTa) when confronted with natural grammatical errors.

MC-BERT: Efficient Language Pre-Training via a Meta Controller

MC-BERT/MC-BERT 10 Jun 2020

Pre-trained contextual representations (e.g., BERT) have become the foundation to achieve state-of-the-art results on many NLP tasks.

Explainable Inference on Sequential Data via Memory-Tracking

KRLGroup/explainable-inference-on-sequential-data-via-memory-tracking 11 Jul 2020

Our results show that we are able to explain the agent's decisions in (1) and to reconstruct the most relevant sentences used by the network to select the story ending in (2).

Cloze Test Helps: Effective Video Anomaly Detection via Learning to Complete Video Events

yuguangnudt/VEC_VAD 27 Aug 2020

To build such a visual cloze test, a certain patch of a spatio-temporal cube (STC) is erased to yield an incomplete event (IE).

Reasoning about Goals, Steps, and Temporal Ordering with WikiHow

zharry29/wikihow-goal-step EMNLP 2020

We propose a suite of reasoning tasks on two types of relations between procedural events: goal-step relations ("learn poses" is a step in the larger goal of "doing yoga") and step-step temporal relations ("buy a yoga mat" typically precedes "learn poses").

A BERT-based Dual Embedding Model for Chinese Idiom Prediction

VisualJoyce/ChengyuBERT COLING 2020

Specifically, we first match the embedding of each candidate idiom with the hidden representation corresponding to the blank in the context.
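The matching step described above can be sketched as a dot product between each candidate idiom's embedding and the hidden state at the blank position, followed by a softmax over candidates. This is a toy illustration under assumed dimensions and random vectors, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 8  # toy dimension; BERT-base uses 768

# Hidden representation at the blank position, e.g. from a BERT encoder
# (random stand-in here).
blank_hidden = rng.normal(size=hidden_dim)

# Embeddings of the candidate idioms (one row per candidate; random stand-ins).
candidate_embeddings = rng.normal(size=(4, hidden_dim))

# Match each candidate against the blank via dot product, then normalize
# the scores into a probability distribution with softmax.
logits = candidate_embeddings @ blank_hidden
probs = np.exp(logits - logits.max())
probs /= probs.sum()
pred = int(np.argmax(probs))
print(pred, probs.round(3))
```

The predicted index is the candidate whose embedding aligns best with the blank's contextual representation; training would push the correct idiom's embedding toward the blank's hidden state.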

Braid: Weaving Symbolic and Neural Knowledge into Coherent Logical Explanations

chanind/tensor-theorem-prover 26 Nov 2020

Traditional symbolic reasoning engines, while attractive for their precision and explicability, have a few major drawbacks: the use of brittle inference procedures that rely on exact matching (unification) of logical terms, an inability to deal with uncertainty, and the need for a precompiled rule-base of knowledge (the "knowledge acquisition" problem).