WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations

By design, word embeddings are unable to model the dynamic nature of word semantics, i.e., the property of words to take on potentially different meanings in different contexts. To address this limitation, dozens of specialized meaning representation techniques, such as sense embeddings or contextualized embeddings, have been proposed. However, despite the popularity of research on this topic, very few evaluation benchmarks exist that specifically target the dynamic semantics of words. In this paper we show that existing models have surpassed the performance ceiling of the standard evaluation dataset for this purpose, Stanford Contextual Word Similarity (SCWS), and highlight its shortcomings. To address the lack of a suitable benchmark, we put forward a large-scale Word-in-Context dataset, called WiC, based on annotations curated by experts, for generic evaluation of context-sensitive representations. WiC is available at https://pilehvar.github.io/wic/.
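WiC casts context-sensitivity as a binary task: given two sentences containing the same target word, decide whether the word is used in the same sense in both. A common baseline for contextualized embeddings (e.g., the ELMo and context2vec results reported below) thresholds the cosine similarity of the target word's two contextual vectors. The sketch below illustrates this; the toy vectors and the threshold value are illustrative assumptions, not figures from the paper.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def predict_same_sense(vec1, vec2, threshold=0.5):
    # WiC is binary: True if the target word is judged to keep the
    # same sense in both contexts. The threshold would normally be
    # tuned on the development set; 0.5 here is an arbitrary choice.
    return cosine(vec1, vec2) >= threshold

def accuracy(pairs, labels, threshold=0.5):
    # Fraction of sentence pairs whose predicted label matches gold.
    preds = [predict_same_sense(v1, v2, threshold) for v1, v2 in pairs]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Hypothetical contextual vectors for two WiC-style instances of "bank":
# a same-sense pair (similar vectors) and a cross-sense pair.
pairs = [
    ((0.9, 0.1), (0.8, 0.2)),
    ((0.9, 0.1), (-0.2, 0.95)),
]
labels = [True, False]
print(accuracy(pairs, labels))  # 1.0 on this toy data
```

In practice the two vectors would come from running a contextualized encoder over each full sentence and extracting the representation at the target word's position; the thresholding step stays the same.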

NAACL 2019

Datasets


Introduced in the Paper:

WiC
| Task                      | Dataset          | Model           | Metric   | Value | Global Rank |
|---------------------------|------------------|-----------------|----------|-------|-------------|
| Word Sense Disambiguation | Words in Context | Sentence LSTM   | Accuracy | 53.1  | # 24        |
| Word Sense Disambiguation | Words in Context | SW2V            | Accuracy | 58.1  | # 19        |
| Word Sense Disambiguation | Words in Context | DeConf          | Accuracy | 58.7  | # 18        |
| Word Sense Disambiguation | Words in Context | BERT-large 340M | Accuracy | 65.5  | # 14        |
| Word Sense Disambiguation | Words in Context | ELMo            | Accuracy | 57.7  | # 20        |
| Word Sense Disambiguation | Words in Context | context2vec     | Accuracy | 59.3  | # 17        |

Methods


No methods listed for this paper.