WinoGrande: An Adversarial Winograd Schema Challenge at Scale

24 Jul 2019  ·  Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

The Winograd Schema Challenge (WSC) (Levesque, Davis, and Morgenstern 2011), a benchmark for commonsense reasoning, is a set of 273 expert-crafted pronoun resolution problems originally designed to be unsolvable by statistical models that rely on selectional preferences or word associations. However, recent advances in neural language models have already reached around 90% accuracy on variants of WSC. This raises an important question: have these models truly acquired robust commonsense capabilities, or do they rely on spurious biases in the datasets that lead to an overestimation of the true capabilities of machine commonsense? To investigate this question, we introduce WinoGrande, a large-scale dataset of 44k problems, inspired by the original WSC design but adjusted to improve both the scale and the hardness of the dataset. Dataset construction consists of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel algorithm, AfLite, that generalizes human-detectable word associations to machine-detectable embedding associations. The best state-of-the-art methods on WinoGrande achieve 59.4-79.1% accuracy, 15-35 points below human performance of 94.0%, depending on the amount of training data allowed. Furthermore, we establish new state-of-the-art results on five related benchmarks: WSC (90.1%), DPR (93.1%), COPA (90.6%), KnowRef (85.6%), and Winogender (97.1%). These results have dual implications: on the one hand, they demonstrate the effectiveness of WinoGrande as a resource for transfer learning; on the other hand, they raise the concern that we are likely overestimating the true capabilities of machine commonsense across all these benchmarks. We emphasize the importance of algorithmic bias reduction in existing and future benchmarks to mitigate such overestimation.
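
The bias-reduction step described above, AfLite, can be summarized as iterated adversarial filtering: train an ensemble of simple linear classifiers on random subsets of precomputed instance embeddings, score each held-out instance by how often the ensemble classifies it correctly, and discard the most predictable instances. The sketch below is a minimal illustration of that loop, not the paper's exact implementation: it assumes embeddings X and labels y are already computed (the paper uses RoBERTa representations), and the ensemble size, split ratio, cutoff, and removal budget are illustrative parameter choices.

```python
# Minimal sketch of an AfLite-style adversarial filtering loop.
# Assumes X (instance embeddings, shape [N, d]) and y (labels, shape [N])
# are precomputed; all hyperparameters below are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def aflite_filter(X, y, n_ensemble=64, train_size=0.5,
                  cutoff=0.75, n_remove=500, min_size=1000):
    """Iteratively remove instances that linear classifiers find
    too predictable from the embeddings alone."""
    idx = np.arange(len(X))  # indices of instances still retained
    while len(idx) > min_size:
        correct = np.zeros(len(idx))
        counts = np.zeros(len(idx))
        for _ in range(n_ensemble):
            # Random train/eval split over the remaining instances.
            perm = np.random.permutation(len(idx))
            n_train = int(train_size * len(idx))
            tr, ev = perm[:n_train], perm[n_train:]
            clf = LogisticRegression(max_iter=1000)
            clf.fit(X[idx[tr]], y[idx[tr]])
            preds = clf.predict(X[idx[ev]])
            correct[ev] += (preds == y[idx[ev]])
            counts[ev] += 1
        # Predictability score: fraction of ensemble members that
        # classified the instance correctly when it was held out.
        score = np.divide(correct, counts,
                          out=np.zeros_like(correct), where=counts > 0)
        too_easy = np.argsort(-score)[:n_remove]
        too_easy = too_easy[score[too_easy] > cutoff]
        if len(too_easy) == 0:
            break  # nothing left above the predictability cutoff
        idx = np.delete(idx, too_easy)
    return idx  # indices of the retained (harder) instances
```

Using weak linear classifiers over strong pretrained embeddings is the key generalization the abstract points to: it catches machine-detectable associations that human-designed lexical filters would miss.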

Results

Task                    Dataset                    Model                                        Accuracy  Rank
Question Answering      COPA                       Causal Strength w/multi-word predicates      76.4      #43
Question Answering      COPA                       Pointwise Mutual Information (10M stories)   65.4      #53
Question Answering      COPA                       RoBERTa-WinoGrande-ft 355M (fine-tuned)      90.6      #17
Question Answering      COPA                       RoBERTa-ft 355M (fine-tuned)                 86.4      #24
Question Answering      COPA                       RoBERTa-WinoGrande 355M (fine-tuned)         84.4      #30
Coreference Resolution  Winograd Schema Challenge  RoBERTa-WinoGrande 355M                      90.1      #9
Coreference Resolution  Winograd Schema Challenge  KEE+NKAM on WinoGrande                       52.8      #74
Coreference Resolution  Winograd Schema Challenge  WKH                                          57.1      #65
Coreference Resolution  Winograd Schema Challenge  RoBERTa-DPR 355M                             83.1      #19
Common Sense Reasoning  WinoGrande                 RoBERTa-DPR 355M (0-shot)                    58.9      #49
Common Sense Reasoning  WinoGrande                 BERT-DPR 345M (0-shot)                       51.0      #70
Common Sense Reasoning  WinoGrande                 RoBERTa-large 355M (0-shot)                  50.0      #72
Common Sense Reasoning  WinoGrande                 BERT-large 345M (0-shot)                     51.9      #67
Common Sense Reasoning  WinoGrande                 RoBERTa-WinoGrande 355M (fine-tuned)         79.1      #15
Common Sense Reasoning  WinoGrande                 BERT-WinoGrande 345M (fine-tuned)            64.9      #41
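
For context on the 0-shot rows above: each WinoGrande item is a sentence with a blank and two candidate fillers, so a model can be scored without task-specific training by filling the blank with each candidate and comparing likelihoods. The sketch below uses masked-LM pseudo-log-likelihood with the Hugging Face roberta-large checkpoint; this is a common evaluation recipe for such two-option items, not necessarily the exact multiple-choice setup used in the paper.

```python
# Sketch of two-option scoring for a WinoGrande-style item: fill the
# blank "_" with each candidate and compare pseudo-log-likelihoods
# under a masked language model. Illustrative recipe, not the paper's
# exact evaluation head.
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
model = RobertaForMaskedLM.from_pretrained("roberta-large").eval()

def option_score(sentence: str, option: str) -> float:
    """Pseudo-log-likelihood of the sentence with the blank filled:
    mask each token in turn and sum its log-probability."""
    ids = tokenizer(sentence.replace("_", option),
                    return_tensors="pt").input_ids
    total = 0.0
    for pos in range(1, ids.size(1) - 1):  # skip <s> and </s>
        masked = ids.clone()
        masked[0, pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked).logits
        logp = torch.log_softmax(logits[0, pos], dim=-1)
        total += logp[ids[0, pos]].item()
    return total

# Classic Winograd-style example, used here purely for illustration.
sent = "The trophy doesn't fit in the suitcase because the _ is too big."
print(max(["trophy", "suitcase"], key=lambda o: option_score(sent, o)))
```

The fine-tuned rows instead train a multiple-choice classifier on the corresponding dataset before evaluation, which is why they score well above the zero-shot baselines.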
