Pre-train and Learn: Preserve Global Information for Graph Neural Networks

27 Oct 2019 · Danhao Zhu, Xin-yu Dai, Jia-Jun Chen

Graph neural networks (GNNs) have shown great power in learning on attributed graphs. However, it is still a challenge for GNNs to utilize information far away from the source node. Moreover, general GNNs require graph attributes as input, so they cannot be applied to plain graphs. In this paper, we propose new models named G-GNNs (Global information for GNNs) to address the above limitations. First, the global structure and attribute features for each node are obtained via unsupervised pre-training, which preserves the global information associated with each node. Then, using the global features together with the raw network attributes, we propose a parallel framework of GNNs to learn different aspects from these features. The proposed learning methods can be applied to both plain graphs and attributed graphs. Extensive experiments have shown that G-GNNs can outperform other state-of-the-art models on three standard evaluation graphs. In particular, our methods establish new benchmark records on Cora (84.31%) and Pubmed (80.95%) when learning on attributed graphs.
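To make the parallel framework concrete, below is a minimal sketch in PyTorch. It is an illustration under stated assumptions, not the paper's exact implementation: it uses a plain GCN backbone (the strongest reported variant, G-APPNP, uses an APPNP backbone), fuses the parallel branches by concatenation, and treats the pre-trained global structure and attribute features as given inputs (the paper obtains them via an unsupervised pre-training step). All layer sizes and the `GGNN` class name are hypothetical.

```python
# Sketch of a parallel GNN framework in the spirit of G-GNN: three branches
# process (raw attributes, pre-trained global structure features, pre-trained
# global attribute features) and their outputs are fused for classification.
# The fusion-by-concatenation and GCN backbone are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph convolution: H' = A_hat @ H @ W, with A_hat pre-normalized."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return a_hat @ self.linear(h)


class GGNN(nn.Module):
    """Parallel branches over raw and pre-trained global features."""
    def __init__(self, attr_dim, struct_dim, glob_attr_dim, hidden, n_classes):
        super().__init__()
        self.branches = nn.ModuleList(
            GCNLayer(d, hidden) for d in (attr_dim, struct_dim, glob_attr_dim)
        )
        self.out = GCNLayer(3 * hidden, n_classes)

    def forward(self, a_hat, x_attr, x_struct, x_glob):
        hs = [F.relu(branch(a_hat, x))
              for branch, x in zip(self.branches, (x_attr, x_struct, x_glob))]
        return self.out(a_hat, torch.cat(hs, dim=-1))
```

A quick usage example with random placeholder data, including the standard symmetric normalization of the adjacency matrix:

```python
N = 6
A = torch.rand(N, N).round()
A = (((A + A.T) > 0).float() + torch.eye(N)).clamp(max=1.0)  # symmetric, self-loops
d = A.sum(dim=1)
a_hat = A / torch.sqrt(d[:, None] * d[None, :])  # D^-1/2 A D^-1/2

model = GGNN(attr_dim=16, struct_dim=8, glob_attr_dim=8, hidden=32, n_classes=3)
logits = model(a_hat,
               torch.randn(N, 16),  # raw node attributes
               torch.randn(N, 8),   # pre-trained global structure features
               torch.randn(N, 8))   # pre-trained global attribute features
print(logits.shape)  # torch.Size([6, 3])
```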

| Task                | Dataset                                            | Model   | Metric   | Value  | Global Rank |
|---------------------|----------------------------------------------------|---------|----------|--------|-------------|
| Node Classification | CiteSeer (Public Split: fixed 20 nodes per class)  | G-APPNP | Accuracy | 72%    | #27         |
| Node Classification | Cora (Public Split: fixed 20 nodes per class)      | G-APPNP | Accuracy | 84.31% | #8          |
| Node Classification | PubMed (Public Split: fixed 20 nodes per class)    | G-APPNP | Accuracy | 80.95% | #9          |
