Word Sense Disambiguation
143 papers with code • 15 benchmarks • 15 datasets
The task of Word Sense Disambiguation (WSD) consists of associating words in context with their most suitable entry in a pre-defined sense inventory. The de facto sense inventory for English WSD is WordNet. For example, given the word “mouse” and the following sentence:
“A mouse consists of an object held in one's hand, with one or more buttons.”
we would assign “mouse” its electronic-device sense (the 4th sense in the WordNet sense inventory).
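As a concrete sketch of the task, the classic Lesk heuristic picks the sense whose definition (gloss) shares the most words with the surrounding context. The two-sense mini inventory below is purely illustrative, not WordNet's actual entries:

```python
# Minimal gloss-overlap (Lesk-style) WSD sketch.
# The tiny sense inventory is hypothetical; real systems query WordNet.
SENSES = {
    "mouse": {
        "mouse.n.01": "any of numerous small rodents with pointed snouts and long tails",
        "mouse.n.04": "a hand-operated electronic device that has one or more buttons",
    }
}

def tokenize(text):
    """Lowercase, strip basic punctuation, return the set of tokens."""
    return set(text.lower().replace(",", " ").replace(".", " ").split())

def lesk(word, context):
    """Return the sense ID whose gloss overlaps most with the context."""
    ctx = tokenize(context)
    best, best_overlap = None, -1
    for sense_id, gloss in SENSES[word].items():
        overlap = len(ctx & tokenize(gloss))
        if overlap > best_overlap:
            best, best_overlap = sense_id, overlap
    return best

sentence = "A mouse consists of an object held in one's hand, with one or more buttons."
print(lesk("mouse", sentence))  # → mouse.n.04 (the electronic-device sense)
```

Modern neural systems replace the word-overlap score with similarity between learned representations, but the sense-inventory lookup structure is the same.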
Libraries
Use these libraries to find Word Sense Disambiguation models and implementations.
Latest papers
LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions
The results demonstrate that our proposed LaMini-LM models are comparable to competitive baselines, while being much smaller in size.
Semantic Specialization for Knowledge-based Word Sense Disambiguation
A promising approach for knowledge-based Word Sense Disambiguation (WSD) is to select the sense whose contextualized embeddings computed for its definition sentence are closest to those computed for a target word in a given sentence.
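The selection rule this paper describes can be sketched with a toy bag-of-words encoder standing in for the contextualized model; the glosses and sense IDs below are illustrative stand-ins for WordNet definition sentences:

```python
from collections import Counter
from math import sqrt

# Hypothetical glosses standing in for WordNet definition sentences.
GLOSSES = {
    "mouse.n.01": "any of numerous small rodents with pointed snouts and long tails",
    "mouse.n.04": "a hand-operated electronic device that has one or more buttons",
}

def embed(text):
    """Toy bag-of-words vector; real systems use contextualized embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def choose_sense(context, glosses):
    """Select the sense whose definition embedding is closest to the context."""
    ctx = embed(context)
    return max(glosses, key=lambda sid: cosine(embed(glosses[sid]), ctx))

sentence = "a mouse is held in one's hand and has one or more buttons"
print(choose_sense(sentence, GLOSSES))  # → mouse.n.04
```

Swapping `embed` for a BERT-style encoder of the definition sentence and the target word in context yields the knowledge-based setup the paper builds on.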
MWE as WSD: Solving Multiword Expression Identification with Word Sense Disambiguation
Recent approaches to word sense disambiguation (WSD) utilize encodings of the sense gloss (definition), in addition to the input context, to improve performance.
ChatGPT: Jack of all trades, master of none
Our comparison of its results with available State-of-the-Art (SOTA) solutions showed that the average loss in quality of the ChatGPT model was about 25% for zero-shot and few-shot evaluation.
Exploring the Benefits of Training Expert Language Models over Instruction Tuning
Recently, Language Models (LMs) instruction-tuned on multiple tasks, also known as multitask-prompted fine-tuning (MT), have shown the capability to generalize to unseen tasks.
Hungry Hungry Hippos: Towards Language Modeling with State Space Models
First, we use synthetic language modeling tasks to understand the gap between SSMs and attention.
Multiple Object Tracking Challenge Technical Report for Team MT_IoT
This is a brief technical report of our proposed method for Multiple-Object Tracking (MOT) Challenge in Complex Environments.
Galactica: A Large Language Model for Science
We believe these results demonstrate the potential for language models as a new interface for science.
Multilingual Word Sense Disambiguation with Unified Sense Representation
As a key natural language processing (NLP) task, word sense disambiguation (WSD) evaluates how well NLP models can understand the lexical semantics of words under specific contexts.
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
Meta-training, which fine-tunes the language model (LM) on various downstream tasks by maximizing the likelihood of the target label given the task instruction and input instance, has improved the zero-shot task generalization performance.