Word Sense Disambiguation with Transformer Models

In this paper, we tackle the task of Word Sense Disambiguation (WSD). We present our system submitted to the Word-in-Context Target Sense Verification challenge, part of the SemDeep workshop at IJCAI 2020 (Breit et al., 2020). The challenge asks participants to predict whether a specific mention of a word in a text matches a pre-defined sense. Our approach fine-tunes pre-trained transformer models such as BERT on the task using different architecture strategies. Our model achieves the best accuracy and precision on Subtask 1, which provides sense definitions for deciding whether the target word in context corresponds to the given sense. We believe the strategies we explored in the context of this challenge can be useful to other Natural Language Processing tasks.
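The verification setup described above can be framed as sentence-pair binary classification: the context containing the target word is paired with the candidate sense definition, and a BERT-style model predicts match or no-match. The sketch below illustrates one plausible way to prepare such an input pair; the marker token and the `build_pair` helper are illustrative assumptions, not the authors' exact preprocessing.

```python
# Hypothetical sketch: framing a WiC-TSV instance as a sentence-pair
# input for a BERT-style binary classifier. The "$" marker and this
# helper are assumptions for illustration, not the paper's exact code.
def build_pair(context, target_start, target_end, definition, marker="$"):
    """Surround the target mention with marker tokens and pair the
    marked context with the candidate sense definition."""
    marked = (context[:target_start] + f" {marker} "
              + context[target_start:target_end] + f" {marker} "
              + context[target_end:])
    # A model such as BertForSequenceClassification would then receive
    # (sentence_a, sentence_b) and output a match / no-match score.
    return " ".join(marked.split()), definition

pair = build_pair("He sat on the bank of the river.", 14, 18,
                  "sloping land beside a body of water")
# → ("He sat on the $ bank $ of the river.",
#    "sloping land beside a body of water")
```

At fine-tuning time the two strings would be fed to the tokenizer as a pair (so they are joined with `[SEP]` and distinguished by segment embeddings), with a binary cross-entropy or two-class softmax objective.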

PDF · Abstract (SemDeep 2021)

Datasets

WiC-TSV
Results from the Paper


Task: Entity Linking · Dataset: WiC-TSV · Model: transformers

Metric                            Value   Global Rank
Task 1 Accuracy: all              77.8    # 1
Task 1 Accuracy: general purpose  75.2    # 1
Task 1 Accuracy: domain specific  81.0    # 1
Task 3 Accuracy: all              71.9    # 4
Task 3 Accuracy: general purpose  77.0    # 2
Task 3 Accuracy: domain specific  65.7    # 5

Methods