MoCoSA: Momentum Contrast for Knowledge Graph Completion with Structure-Augmented Pre-trained Language Models

16 Aug 2023 · Jiabang He, Liu Jia, Lei Wang, Xiyao Li, Xing Xu

Knowledge Graph Completion (KGC) aims to conduct reasoning on the facts within knowledge graphs and automatically infer missing links. Existing methods can mainly be categorized as structure-based or description-based. On the one hand, structure-based methods effectively represent relational facts in knowledge graphs using entity embeddings. However, they struggle with semantically rich real-world entities due to limited structural information, and they fail to generalize to unseen entities. On the other hand, description-based methods leverage pre-trained language models (PLMs) to understand textual information, and they exhibit strong robustness towards unseen entities. However, they have difficulty training with large numbers of negative samples and often lag behind structure-based methods. To address these issues, in this paper we propose Momentum Contrast for knowledge graph completion with Structure-Augmented pre-trained language models (MoCoSA), which allows the PLM to perceive structural information through an adaptable structure encoder. To improve learning efficiency, we propose momentum hard negative sampling and intra-relation negative sampling. Experimental results demonstrate that our approach achieves state-of-the-art performance in terms of mean reciprocal rank (MRR), with improvements of 2.5% on WN18RR and 21% on OpenBG500.
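The abstract's core training mechanics follow the MoCo recipe: a momentum-updated key encoder and a queue of previously encoded keys that serve as extra negatives. The sketch below (not the authors' code; dimensions, names, and the use of linear stand-in encoders are illustrative assumptions) shows the three moving parts: scoring a query against its positive key plus queued negatives, the exponential-moving-average update of the key encoder, and the FIFO queue refresh.

```python
import numpy as np

DIM = 8       # embedding dimension (assumed)
QUEUE = 16    # negative-queue size (assumed)
M = 0.999     # momentum coefficient, as in MoCo

rng = np.random.default_rng(0)
w_query = rng.normal(size=(DIM, DIM))      # stand-in for the query encoder
w_key = w_query.copy()                     # key encoder starts as a copy
neg_queue = rng.normal(size=(QUEUE, DIM))  # queued negative key embeddings

def momentum_update(w_key, w_query, m=M):
    """EMA update of the key encoder from the query encoder's weights."""
    return m * w_key + (1.0 - m) * w_query

def info_nce_logits(query, pos_key, queue, temperature=0.07):
    """Cosine-similarity logits of a query vs. its positive and queued negatives."""
    q = query / np.linalg.norm(query)
    k = pos_key / np.linalg.norm(pos_key)
    negs = queue / np.linalg.norm(queue, axis=1, keepdims=True)
    # logit 0 is the positive pair; the rest are the queued negatives
    return np.concatenate([[q @ k], negs @ q]) / temperature

x = rng.normal(size=DIM)        # a stand-in input fact representation
query = w_query @ x             # query-encoder output
pos_key = w_key @ x             # same fact through the key encoder

logits = info_nce_logits(query, pos_key, neg_queue)
w_key = momentum_update(w_key, w_query)

# Enqueue the fresh key and drop the oldest negative (FIFO queue).
neg_queue = np.vstack([neg_queue[1:], pos_key])
```

In training, the cross-entropy loss over these logits (with the positive at index 0) would drive the query embedding toward its key while pushing it away from the queue, which is what lets the method scale to many negatives without a huge batch.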


Results from the Paper


Task             Dataset     Model    Metric    Value   Global Rank
Link Prediction  FB15k-237   MoCoSA   MRR       0.387   #6
                                      Hits@10   0.578   #2
                                      Hits@3    0.42    #4
                                      Hits@1    0.292   #7
Link Prediction  OpenBG500   MoCoSA   MRR       0.634   #1
                                      Hits@1    0.531   #1
                                      Hits@3    0.711   #1
                                      Hits@10   0.83    #1
Link Prediction  WN18RR      MoCoSA   MRR       0.696   #1
                                      Hits@10   0.82    #1
                                      Hits@3    0.737   #1
                                      Hits@1    0.624   #1
