Multimodal Knowledge Learning for Named Entity Disambiguation

ACL ARR August 2021 · Anonymous

With the popularity of online social media in recent years, massive-scale multimodal information has brought new challenges to the traditional Named Entity Disambiguation (NED) task. Recently, Multimodal Named Entity Disambiguation (MNED) has been proposed to link ambiguous mentions, together with their textual and visual contexts, to a predefined knowledge graph. Existing attempts handle this task mainly by annotating multimodal mentions and adding multimodal features to traditional NED models. These methods still suffer from 1) the scarcity of multimodal annotation data relative to the huge scale of unlabeled corpora and 2) a failure to model multimodal information at the knowledge level. In this paper, we present a pioneering study on leveraging multimodal knowledge learning to address the MNED task. Specifically, we propose a knowledge-guided transfer learning strategy to extract unified representations from different modalities and enrich multimodal knowledge in a meta-learning manner, which is much easier than collecting an ambiguous mention corpus. We then propose an Interactive Multimodal Learning Network (IMN), which fully utilizes multimodal information on both the mention and knowledge sides. To verify the effectiveness of the proposed method, we conduct comparisons on a public large-scale MNED dataset built on a Twitter knowledge base. Experimental results show that our method is superior to state-of-the-art multimodal methods.
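The abstract describes the IMN only at a high level. Below is a minimal, hedged sketch of what an interactive cross-modal module for MNED could look like: text features and image features attend to each other, the fused representation is computed for both the mention side and the candidate-entity (knowledge) side, and candidates are ranked by similarity. All module names, dimensions, and the fusion scheme are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative sketch only: a simple cross-modal interaction module in the
# spirit of an Interactive Multimodal Learning Network. The fusion scheme,
# pooling, and dimensions are assumptions made for this example.
import torch
import torch.nn as nn


class CrossModalInteraction(nn.Module):
    """Text features attend over image regions (and vice versa),
    then both views are fused into one mention/entity representation."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.txt2img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img2txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, text_feats, image_feats):
        # text_feats: (batch, n_tokens, dim); image_feats: (batch, n_regions, dim)
        t, _ = self.txt2img(text_feats, image_feats, image_feats)   # text attends to image
        v, _ = self.img2txt(image_feats, text_feats, text_feats)    # image attends to text
        pooled = torch.cat([t.mean(dim=1), v.mean(dim=1)], dim=-1)  # mean-pool and concatenate
        return self.fuse(pooled)                                    # (batch, dim)


# Disambiguation framed as similarity between a mention and candidate entities,
# both encoded with the same interaction module (a common formulation).
encoder = CrossModalInteraction()
mention = encoder(torch.randn(1, 20, 256), torch.randn(1, 49, 256))     # (1, dim)
candidates = encoder(torch.randn(5, 30, 256), torch.randn(5, 49, 256))  # (5, dim)
scores = mention @ candidates.t()                                       # (1, 5) ranking scores
print(scores.argmax(dim=-1))  # index of the highest-scoring candidate entity
```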
