Pure Message Passing Can Estimate Common Neighbor for Link Prediction

2 Sep 2023 · Kaiwen Dong, Zhichun Guo, Nitesh V. Chawla

Message Passing Neural Networks (MPNNs) have emerged as the de facto standard in graph representation learning. However, when it comes to link prediction, they often struggle and are surpassed by simple heuristics such as Common Neighbor (CN). This discrepancy stems from a fundamental limitation: while MPNNs excel at node-level representation, they struggle to encode the joint structural features essential to link prediction, such as CN. To bridge this gap, we posit that, by harnessing the orthogonality of input vectors, pure message passing can indeed capture joint structural features. Specifically, we study the proficiency of MPNNs in approximating CN heuristics. Based on our findings, we introduce the Message Passing Link Predictor (MPLP), a novel link prediction model. MPLP taps into quasi-orthogonal vectors to estimate link-level structural features, all while preserving node-level complexity. Moreover, our approach demonstrates that leveraging message passing to capture structural features can offset MPNNs' expressiveness limitations at the expense of estimation variance. We conduct experiments on benchmark datasets from various domains, where our method consistently outperforms the baseline methods.
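
To make the quasi-orthogonality idea concrete, here is a minimal NumPy sketch (not the authors' implementation; the function name, vector dimension, and toy graph are illustrative assumptions): when each node carries a random unit vector, one round of sum-aggregation message passing followed by an inner product estimates the common-neighbor count, since cross terms between distinct nodes' vectors vanish in expectation while shared neighbors each contribute approximately 1.

```python
import numpy as np

def estimate_common_neighbors(adj, u, v, dim=1024, seed=0):
    """Sketch: estimate |N(u) ∩ N(v)| with quasi-orthogonal random vectors
    and one sum-aggregation message-passing step (illustrative only)."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    # Random unit vectors are quasi-orthogonal in high dimensions:
    # <x_i, x_j> ≈ 0 for i != j, while <x_i, x_i> = 1.
    x = rng.standard_normal((n, dim))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    # One message-passing step: each node sums its neighbors' vectors.
    h = adj @ x
    # Cross terms cancel in expectation, so the inner product counts
    # the neighbors shared by u and v, up to estimation variance.
    return float(h[u] @ h[v])

# Toy graph: nodes 0 and 1 share neighbors 2, 3, 4 (true CN = 3).
A = np.zeros((6, 6))
for i, j in [(0, 2), (0, 3), (0, 4), (1, 2), (1, 3), (1, 4), (0, 5)]:
    A[i, j] = A[j, i] = 1
print(estimate_common_neighbors(A, 0, 1))  # ≈ 3
```

Increasing the vector dimension reduces the estimation variance, which mirrors the variance-for-expressiveness trade-off described in the abstract.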


Datasets


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Link Property Prediction | ogbl-citation2 | MPLP | Test MRR | 0.9072 ± 0.0012 | # 1 |
| Link Property Prediction | ogbl-citation2 | MPLP | Validation MRR | 0.9074 ± 0.0011 | # 1 |
| Link Property Prediction | ogbl-citation2 | MPLP | Number of params | 749,757,283 | # 1 |
| Link Property Prediction | ogbl-citation2 | MPLP | Ext. data | No | # 1 |
| Link Property Prediction | ogbl-ppa | MPLP | Test Hits@100 | 0.6524 ± 0.0150 | # 2 |
| Link Property Prediction | ogbl-ppa | MPLP | Validation Hits@100 | 0.6685 ± 0.0073 | # 2 |
| Link Property Prediction | ogbl-ppa | MPLP | Number of params | 147,794,531 | # 2 |
| Link Property Prediction | ogbl-ppa | MPLP | Ext. data | No | # 1 |
