Entity Disambiguation
57 papers with code • 11 benchmarks • 12 datasets
Entity Disambiguation is the task of linking mentions of ambiguous entities to their referent entities in a knowledge base such as Wikipedia.
Source: Leveraging Deep Neural Networks and Knowledge Graphs for Entity Disambiguation
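To make the task concrete, here is a minimal, hypothetical sketch of entity disambiguation: given an ambiguous mention and its surrounding context, pick the knowledge-base candidate whose description best overlaps the context. The knowledge base, candidate table, and `disambiguate` function are all invented for illustration; real systems use learned encoders and large candidate sets rather than word overlap.

```python
# Toy knowledge base: entity id -> short description (invented for illustration)
KB = {
    "Jaguar_(animal)": "large cat species native to the Americas jungle predator",
    "Jaguar_Cars": "british luxury car manufacturer automobile company vehicles",
}

# Candidate sets for ambiguous surface forms (also invented)
CANDIDATES = {"jaguar": ["Jaguar_(animal)", "Jaguar_Cars"]}

def disambiguate(mention, context):
    """Return the candidate entity whose description best matches the context."""
    context_words = set(context.lower().split())
    best_entity, best_score = None, -1
    for entity in CANDIDATES.get(mention.lower(), []):
        # Score = number of words shared between context and entity description
        score = len(context_words & set(KB[entity].lower().split()))
        if score > best_score:
            best_entity, best_score = entity, score
    return best_entity

print(disambiguate("jaguar", "The jaguar is a large cat found in the Americas"))
# → Jaguar_(animal)
```

The papers below replace this bag-of-words scorer with neural context encoders, but the core pipeline (candidate generation, then context-based scoring) is the same.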
Latest papers
Fast and Effective Biomedical Entity Linking Using a Dual Encoder
Additionally, we modify our dual encoder model for end-to-end biomedical entity linking, performing both mention span detection and entity disambiguation, and outperform two recently proposed models.
Entity Linking in 100 Languages
We propose a new formulation for multilingual entity linking, where language-specific mentions resolve to a language-agnostic Knowledge Base.
Bootleg: Chasing the Tail with Self-Supervised Named Entity Disambiguation
A challenge for named entity disambiguation (NED), the task of mapping textual mentions to entities in a knowledge base, is how to disambiguate entities that appear rarely in the training data, termed tail entities.
Autoregressive Entity Retrieval
For instance, encyclopedias such as Wikipedia are structured by entities (e.g., one per Wikipedia article).
PNEL: Pointer Network based End-To-End Entity Linking over Knowledge Graphs
In such a pipeline, Entity Linking (EL) is often the first step.
Evaluating the Impact of Knowledge Graph Context on Entity Disambiguation Models
We further hypothesize that our proposed KG context can be standardized for Wikipedia, and we evaluate the impact of KG context on a state-of-the-art NED model for the Wikipedia knowledge base.
Improving Broad-Coverage Medical Entity Linking with Semantic Type Prediction and Large-Scale Datasets
To address the dearth of annotated training data for medical entity linking, we present WikiMed and PubMedDS, two large-scale medical entity linking datasets, and demonstrate that pre-training MedType on these datasets further improves entity linking performance.
A Recurrent Model for Collective Entity Linking with Adaptive Features
Traditional machine-learning-based methods for NED have been outperformed, and largely superseded, by state-of-the-art deep learning models.
Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking
We show on an entity linking benchmark that (i) this model improves entity representations over plain BERT, (ii) it outperforms entity linking architectures that optimize the tasks separately, and (iii) it comes second only to the current state of the art, which performs mention detection and entity disambiguation jointly.
Learning Dynamic Context Augmentation for Global Entity Linking
Despite the recent success of collective entity linking (EL) methods, these "global" inference methods may yield sub-optimal results when the "all-mention coherence" assumption breaks, and they often suffer from high computational cost at inference time due to the complex search space.