Word Sense Disambiguation

143 papers with code • 15 benchmarks • 15 datasets

The task of Word Sense Disambiguation (WSD) consists of associating words in context with their most suitable entry in a pre-defined sense inventory. The de facto sense inventory for English WSD is WordNet. For example, given the word “mouse” and the following sentence:

“A mouse consists of an object held in one's hand, with one or more buttons.”

we would assign “mouse” its electronic-device sense (the fourth sense in the WordNet sense inventory).
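
The WordNet inventory can be inspected directly in code. The sketch below uses NLTK's WordNet interface together with its built-in Lesk algorithm, a classic gloss-overlap baseline (this assumes nltk is installed and the WordNet corpus has been downloaded via nltk.download('wordnet'); it is a minimal illustration, not a competitive WSD system):

    from nltk.corpus import wordnet as wn
    from nltk.wsd import lesk

    context = "A mouse consists of an object held in one's hand, with one or more buttons.".split()

    # Enumerate the noun senses of "mouse" with their glosses.
    for i, synset in enumerate(wn.synsets("mouse", pos=wn.NOUN), start=1):
        print(i, synset.name(), "-", synset.definition())

    # Lesk picks the sense whose gloss shares the most words with the context.
    sense = lesk(context, "mouse", pos=wn.NOUN)
    print("Predicted:", sense.name(), "-", sense.definition())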

LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions

mbzuai-nlp/lamini-lm 27 Apr 2023

The results demonstrate that our proposed LaMini-LM models are comparable to competitive baselines, while being much smaller in size.

Semantic Specialization for Knowledge-based Word Sense Disambiguation

s-mizuki-nlp/semantic_specialization_for_wsd 22 Apr 2023

A promising approach for knowledge-based Word Sense Disambiguation (WSD) is to select the sense whose contextualized embeddings computed for its definition sentence are closest to those computed for a target word in a given sentence.
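
That selection step reduces to a nearest-neighbor search over gloss embeddings. A minimal sketch of the idea (the embed function is a hypothetical stand-in for any contextualized encoder; the paper's actual method additionally specializes the embeddings):

    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def disambiguate(target_vec, sense_glosses, embed):
        # target_vec: embedding of the target word in its sentence.
        # sense_glosses: dict mapping sense IDs to definition sentences.
        # embed: hypothetical sentence encoder returning a vector.
        scores = {sense: cosine(target_vec, embed(gloss))
                  for sense, gloss in sense_glosses.items()}
        # Choose the sense whose gloss embedding is closest to the target.
        return max(scores, key=scores.get)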

MWE as WSD: Solving Multiword Expression Identification with Word Sense Disambiguation

mindful/mweaswsd 12 Mar 2023

Recent approaches to word sense disambiguation (WSD) utilize encodings of the sense gloss (definition), in addition to the input context, to improve performance.

ChatGPT: Jack of all trades, master of none

clarin-pl/chatgpt-evaluation-01-2023 21 Feb 2023

Our comparison of its results with available State-of-the-Art (SOTA) solutions showed that the average loss in quality of the ChatGPT model was about 25% for zero-shot and few-shot evaluation.

Exploring the Benefits of Training Expert Language Models over Instruction Tuning

joeljang/elm 7 Feb 2023

Recently, Language Models (LMs) instruction-tuned on multiple tasks, also known as multitask-prompted fine-tuning (MT), have shown the capability to generalize to unseen tasks.

Hungry Hungry Hippos: Towards Language Modeling with State Space Models

hazyresearch/safari 28 Dec 2022

First, we use synthetic language modeling tasks to understand the gap between SSMs and attention.

Multiple Object Tracking Challenge Technical Report for Team MT_IoT

BingfengYan/DS_OCSORT 7 Dec 2022

This is a brief technical report of our proposed method for Multiple-Object Tracking (MOT) Challenge in Complex Environments.

Galactica: A Large Language Model for Science

paperswithcode/galai 16 Nov 2022

We believe these results demonstrate the potential for language models as a new interface for science.

Multilingual Word Sense Disambiguation with Unified Sense Representation

suytingwan/multilingual-wsd COLING 2022

As a key natural language processing (NLP) task, word sense disambiguation (WSD) evaluates how well NLP models can understand the lexical semantics of words under specific contexts.

Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners

seonghyeonye/flipped-learning 6 Oct 2022

Meta-training, which fine-tunes the language model (LM) on various downstream tasks by maximizing the likelihood of the target label given the task instruction and input instance, has improved the zero-shot task generalization performance.
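
In a causal-LM setup this objective is just cross-entropy on the target tokens, with the instruction-plus-input prompt masked out of the loss. A hedged sketch assuming a Hugging Face-style model and tokenizer (the function name and framing are illustrative, not the paper's code):

    import torch

    def meta_training_loss(model, tokenizer, instruction, x, y):
        # Maximize log p(y | instruction, x): score only the target tokens.
        prompt_ids = tokenizer(instruction + x, return_tensors="pt").input_ids
        target_ids = tokenizer(y, return_tensors="pt").input_ids
        input_ids = torch.cat([prompt_ids, target_ids], dim=1)
        labels = input_ids.clone()
        labels[:, :prompt_ids.size(1)] = -100  # ignore prompt positions in the loss
        return model(input_ids=input_ids, labels=labels).loss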
