Multi-modal Entity Alignment
10 papers with code • 7 benchmarks • 4 datasets
Most implemented papers
Visual Pivoting for (Unsupervised) Entity Alignment
This work studies the use of visual semantic representations to align entities in heterogeneous knowledge graphs (KGs).
MMEA: Entity Alignment for Multi-Modal Knowledge Graphs
In this paper, we propose a novel solution called Multi-Modal Entity Alignment (MMEA) to address the problem of entity alignment from a multi-modal view.
Multi-modal Siamese Network for Entity Alignment
To deal with this problem, in this paper, we propose a novel Multi-modal Siamese Network for Entity Alignment (MSNEA) that aligns entities across different MMKGs and comprehensively leverages multi-modal knowledge by exploiting inter-modal effects.
Multi-modal Contrastive Representation Learning for Entity Alignment
Multi-modal entity alignment aims to identify equivalent entities between two different multi-modal knowledge graphs, which consist of structural triples and images associated with entities.
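The task described above is commonly approached by embedding each entity's modalities (structure, image), fusing them, and matching entities across graphs by embedding similarity. A minimal, generic sketch of that idea follows; the fusion scheme (weighted concatenation), the toy entities, and the embeddings are illustrative placeholders, not the method of any paper listed here.

```python
import math

def fuse(struct_emb, visual_emb, alpha=0.5):
    # Simple weighted concatenation of the two modality embeddings
    # (one common baseline fusion; real systems learn this fusion).
    return [alpha * v for v in struct_emb] + [(1 - alpha) * v for v in visual_emb]

def cosine(u, v):
    # Cosine similarity between two fused entity embeddings.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Toy multi-modal KGs: entity -> (structural embedding, visual embedding).
kg1 = {"Berlin@KG1": ([1.0, 0.1], [0.9, 0.2]),
       "Paris@KG1":  ([0.1, 1.0], [0.2, 0.9])}
kg2 = {"Berlin@KG2": ([0.9, 0.2], [1.0, 0.1]),
       "Paris@KG2":  ([0.2, 0.9], [0.1, 1.0])}

fused1 = {e: fuse(s, v) for e, (s, v) in kg1.items()}
fused2 = {e: fuse(s, v) for e, (s, v) in kg2.items()}

# Align each KG1 entity to its most similar KG2 entity.
alignment = {e1: max(fused2, key=lambda e2: cosine(f1, fused2[e2]))
             for e1, f1 in fused1.items()}
print(alignment)  # {'Berlin@KG1': 'Berlin@KG2', 'Paris@KG1': 'Paris@KG2'}
```

In practice the embeddings come from trained encoders (e.g. a graph encoder for triples and a vision model for images), and contrastive objectives pull matched cross-graph pairs together while pushing mismatched pairs apart.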
MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid
Multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) whose entities are associated with relevant images.
Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment
As a crucial extension of entity alignment (EA), multi-modal entity alignment (MMEA) aims to identify identical entities across disparate knowledge graphs (KGs) by exploiting associated visual information.
Universal Multi-modal Entity Alignment via Iteratively Fusing Modality Similarity Paths
The objective of Entity Alignment (EA) is to identify equivalent entity pairs from multiple Knowledge Graphs (KGs) and create a more comprehensive and unified KG.
Multi-Modal Knowledge Graph Transformer Framework for Multi-Modal Entity Alignment
To address these challenges, we propose a novel MMEA transformer, called MoAlign, that hierarchically introduces neighbor features, multi-modal attributes, and entity types to enhance the alignment task.
Towards Semantic Consistency: Dirichlet Energy Driven Robust Multi-Modal Entity Alignment
This study introduces a novel approach, DESAlign, which addresses these issues by applying a theoretical framework based on Dirichlet energy to ensure semantic consistency.
The Power of Noise: Toward a Unified Multi-modal Knowledge Graph Representation Framework
In this work, to evaluate models' ability to accurately embed entities within MMKGs, we focus on two widely researched tasks: Multi-modal Knowledge Graph Completion (MKGC) and Multi-modal Entity Alignment (MMEA).