Graph Attention Multi-Layer Perceptron

9 Jun 2022 · Wentao Zhang, Ziqi Yin, Zeang Sheng, Yang Li, Wen Ouyang, Xiaosen Li, Yangyu Tao, Zhi Yang, Bin Cui

Graph neural networks (GNNs) have achieved great success in many graph-based applications. However, the enormous size and high sparsity of industrial graphs hinder their adoption in such scenarios. Although some scalable GNNs have been proposed for large-scale graphs, they adopt a fixed $K$-hop neighborhood for every node and therefore suffer from over-smoothing when large propagation depths are applied to nodes in sparse regions. To tackle this issue, we propose a new GNN architecture, Graph Attention Multi-Layer Perceptron (GAMLP), which captures the underlying correlations between different scales of graph knowledge. We have deployed GAMLP at Tencent on the Angel platform, and we evaluate it on both public real-world datasets and large-scale industrial datasets. Extensive experiments on these 14 graph datasets demonstrate that GAMLP achieves state-of-the-art performance while enjoying high scalability and efficiency. Specifically, it outperforms GAT by 1.3% in predictive accuracy on our large-scale Tencent Video dataset while achieving up to $50\times$ training speedup. It also ranks first on the leaderboards of both the largest homogeneous graph (ogbn-papers100M) and the largest heterogeneous graph (ogbn-mag) in the Open Graph Benchmark.
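
The abstract's key mechanism is decoupling propagation from training: multi-hop features are precomputed once offline, and each node then attends over its own hop-wise representations, so nodes in sparse regions can lean on deeper hops while others stay shallow. The sketch below is a minimal, hypothetical PyTorch rendition of that idea, not the authors' released implementation; `precompute_hops` and `HopAttentionMLP` are names invented here, and the paper's actual attention uses several reference-vector variants that differ in detail.

```python
# Minimal sketch (assumptions: dense node features, a precomputed normalized
# sparse adjacency, and a simplified mean-reference attention over hops).
import torch
import torch.nn as nn
import torch.nn.functional as F

def precompute_hops(adj_norm, x, num_hops):
    """Precompute [X, A_hat X, A_hat^2 X, ...] once, outside training.
    adj_norm: normalized sparse adjacency (N, N); x: node features (N, d)."""
    hops = [x]
    for _ in range(num_hops):
        hops.append(torch.sparse.mm(adj_norm, hops[-1]))
    return hops  # list of K+1 tensors of shape (N, d)

class HopAttentionMLP(nn.Module):
    """Hypothetical GAMLP-style model: node-adaptive attention over hops,
    followed by a plain MLP (no message passing at training time)."""
    def __init__(self, in_dim, hidden_dim, out_dim, num_hops):
        super().__init__()
        self.hop_proj = nn.Linear(in_dim, hidden_dim)
        self.att = nn.Linear(2 * hidden_dim, 1)  # scores one hop per node
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, hops):
        # hops: list of K+1 precomputed (N, in_dim) propagated features
        h = torch.stack([self.hop_proj(x) for x in hops], dim=1)  # (N, K+1, H)
        ref = h.mean(dim=1, keepdim=True).expand_as(h)            # reference view
        scores = self.att(torch.cat([h, ref], dim=-1)).squeeze(-1)  # (N, K+1)
        alpha = F.softmax(scores, dim=1).unsqueeze(-1)            # per-node weights
        return self.mlp((alpha * h).sum(dim=1))                   # (N, out_dim)
```

Because propagation happens once as preprocessing, training reduces to mini-batch MLP updates over node feature rows, which is what yields the scalability and speedup the abstract reports.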

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Node Property Prediction | ogbn-mag | NARS-GAMLP+RLU | Test Accuracy | 0.5590 ± 0.0027 | #11 |
| | | | Validation Accuracy | 0.5702 ± 0.0041 | #11 |
| | | | Number of params | 6734882 | #20 |
| | | | Ext. data | No | #1 |
| Node Property Prediction | ogbn-mag | NARS-GAMLP | Test Accuracy | 0.5396 ± 0.0018 | #16 |
| | | | Validation Accuracy | 0.5548 ± 0.0008 | #16 |
| | | | Number of params | 6734882 | #20 |
| | | | Ext. data | No | #1 |