Graph Propagation Transformer for Graph Representation Learning

19 May 2023 · Zhe Chen, Hao Tan, Tao Wang, Tianrun Shen, Tong Lu, Qiuying Peng, Cheng Cheng, Yue Qi

This paper presents a novel transformer architecture for graph representation learning. The core insight of our method is to fully consider the information propagation among nodes and edges in a graph when building the attention module in the transformer blocks. Specifically, we propose a new attention mechanism called Graph Propagation Attention (GPA). It explicitly passes information among nodes and edges in three ways, i.e., node-to-node, node-to-edge, and edge-to-node, which is essential for learning graph-structured data. On this basis, we design an effective transformer architecture named Graph Propagation Transformer (GPTrans) to further facilitate learning on graph data. We verify the performance of GPTrans in a wide range of graph learning experiments on several benchmark datasets. The results show that our method outperforms many state-of-the-art transformer-based graph models. The code will be released at https://github.com/czczup/GPTrans.
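The abstract does not spell out how the three propagation paths are implemented, so the following is only a minimal PyTorch sketch of one plausible reading: edge features bias the attention logits (edge-to-node), the attention maps are projected back into edge features (node-to-edge), and standard scaled dot-product attention handles the node-to-node update. The class name `GraphPropagationAttention`, all layer choices, and all dimensions below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class GraphPropagationAttention(nn.Module):
    """Hypothetical sketch of attention that exchanges information between
    node features x (B, N, dim) and dense edge features e (B, N, N, edge_dim)."""

    def __init__(self, dim, edge_dim, num_heads=8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.edge_in = nn.Linear(edge_dim, num_heads)   # edge-to-node: edges bias attention logits
        self.edge_out = nn.Linear(num_heads, edge_dim)  # node-to-edge: attention written back to edges
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, e):
        B, N, _ = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.unbind(dim=2)                                  # each (B, N, H, d_h)
        # node-to-node: scaled dot-product attention logits
        attn = torch.einsum("bihd,bjhd->bhij", q, k) * self.scale    # (B, H, N, N)
        # edge-to-node: add a per-head bias derived from edge features
        attn = attn + self.edge_in(e).permute(0, 3, 1, 2)
        attn = attn.softmax(dim=-1)
        # aggregate values into updated node features
        x = torch.einsum("bhij,bjhd->bihd", attn, v).reshape(B, N, -1)
        x = self.proj(x)
        # node-to-edge: project the attention map back into edge features
        e = e + self.edge_out(attn.permute(0, 2, 3, 1))
        return x, e


# Toy usage: batch of 2 graphs with 4 nodes each.
gpa = GraphPropagationAttention(dim=16, edge_dim=8, num_heads=4)
x = torch.randn(2, 4, 16)
e = torch.randn(2, 4, 4, 8)
x_out, e_out = gpa(x, e)
```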


Results from the Paper


Ranked #2 on Graph Regression on PCQM4M-LSC (Validation MAE metric)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Node Classification | CLUSTER | GPTrans-Nano | Accuracy | 78.07 | #4 |
| Graph Property Prediction | ogbg-molhiv | GPTrans-B | Test ROC-AUC | 0.8126 ± 0.0032 | #9 |
| Graph Property Prediction | ogbg-molhiv | GPTrans-B | Ext. data | Yes | #1 |
| Node Classification | PATTERN | GPTrans-Nano | Accuracy | 86.734 ± 0.008 | #4 |
| Graph Regression | PCQM4M-LSC | GPTrans-L | Validation MAE | 0.1151 | #2 |
| Graph Regression | PCQM4Mv2-LSC | GPTrans-T | Validation MAE | 0.0833 | #7 |
| Graph Regression | PCQM4Mv2-LSC | GPTrans-T | Test MAE | 0.0842 | #6 |
| Graph Regression | PCQM4Mv2-LSC | GPTrans-L | Validation MAE | 0.0809 | #5 |
| Graph Regression | PCQM4Mv2-LSC | GPTrans-L | Test MAE | 0.0821 | #5 |
| Graph Regression | ZINC-500k | GPTrans-Nano | MAE | 0.077 | #8 |

Methods