An original model for multi-target learning of logical rules for knowledge graph reasoning

12 Dec 2021  ·  Yuliang Wei, Haotian Li, Guodong Xin, Yao Wang, Bailing Wang

Large-scale knowledge graphs provide structured representations of human knowledge. However, since it is impossible to collect all knowledge, knowledge graphs are usually incomplete. Reasoning over existing facts paves the way to discovering missing ones. In this paper, we study the problem of learning logical rules for reasoning on knowledge graphs in order to complete missing factual triplets. Learning logical rules equips a model with strong interpretability as well as the ability to generalize to similar tasks. We propose a model that fully exploits the training data and also handles multi-target scenarios. In addition, given the deficiencies of existing measures for evaluating model performance and the quality of mined rules, we propose two novel indicators to address these problems. Experimental results empirically demonstrate that our model outperforms state-of-the-art methods on five benchmark datasets. The results also confirm the effectiveness of the proposed indicators.
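To make the idea of rule-based completion concrete, here is a minimal sketch (not the paper's model) of how a learned chain-like Horn rule can infer missing triplets from existing facts. The rule, relation names, and entities are all hypothetical, chosen only for illustration:

```python
# Hypothetical knowledge graph as a set of (head, relation, tail) triplets.
facts = {
    ("alice", "born_in", "paris"),
    ("paris", "city_of", "france"),
    ("bob", "born_in", "rome"),
    ("rome", "city_of", "italy"),
}

def apply_chain_rule(facts, body, head_rel):
    """Apply a length-2 chain rule: r1(x, y) AND r2(y, z) => rh(x, z).

    Returns only triplets not already present in the graph,
    i.e. the candidate missing facts the rule discovers.
    """
    r1, r2 = body
    inferred = set()
    for (x, rel1, y) in facts:
        if rel1 != r1:
            continue
        for (y2, rel2, z) in facts:
            if rel2 == r2 and y2 == y:
                triple = (x, head_rel, z)
                if triple not in facts:
                    inferred.add(triple)
    return inferred

# Illustrative rule: born_in(x, y) AND city_of(y, z) => nationality(x, z)
new_facts = apply_chain_rule(facts, body=("born_in", "city_of"),
                             head_rel="nationality")
# new_facts now contains ("alice", "nationality", "france")
# and ("bob", "nationality", "italy")
```

Because the rule itself is symbolic, each inferred triplet comes with an explicit derivation, which is the interpretability advantage the abstract refers to.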
