We present a further analysis of visual modality incompleteness, benchmarking the latest MMEA models on our proposed dataset, MMEA-UMVM.

To create our MMEA-UMVM (uncertainly missing visual modality) datasets, we perform random image dropping on existing MMEA datasets. Specifically, we randomly discard entity images to achieve varying degrees of visual modality missing, with ratios ranging from 0.05 up to the maximum $R_{img}$ of the raw datasets, in steps of 0.05 or 0.1. In total, this yields 97 data splits.
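The dropping procedure can be sketched as follows. This is a minimal illustration only, assuming entity images are stored as a dict keyed by entity ID; the function and variable names (`drop_images`, `entity_images`, `missing_ratio`) are hypothetical and not taken from the official release.

```python
import random

def drop_images(entity_images, missing_ratio, seed=0):
    """Randomly discard entity images so that `missing_ratio` of entities
    lose their visual modality (illustrative sketch, not the official script)."""
    rng = random.Random(seed)
    entity_ids = list(entity_images)
    num_drop = int(round(len(entity_ids) * missing_ratio))
    dropped = set(rng.sample(entity_ids, num_drop))
    # Keep only entities whose images were not dropped.
    return {eid: img for eid, img in entity_images.items() if eid not in dropped}

# Toy example: 1000 entities, splits at missing ratios 0.05, 0.10, ..., 0.40.
entity_images = {f"ent_{i}": f"img_{i}.jpg" for i in range(1000)}
ratios = [round(0.05 * k, 2) for k in range(1, 9)]
splits = {r: drop_images(entity_images, r) for r in ratios}
for r in ratios:
    print(f"missing ratio {r:.2f}: {len(splits[r])} entities keep an image")
```

In practice, the upper bound of the ratio list would be set per dataset to its maximum $R_{img}$, as described above.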

Refer to the paper "Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment" for more details.
