Segmented Graph-Bert for Graph Instance Modeling

9 Feb 2020  ·  Jiawei Zhang ·

In graph instance representation learning, two major obstacles render existing representation learning models ineffective: the diverse sizes of graph instances and the orderless property of graph nodes. In this paper, we examine the effectiveness of GRAPH-BERT, which was originally designed for node representation learning tasks, on graph instance representation learning. To adapt GRAPH-BERT to this new problem setting, we re-design it with a segmented architecture, which we name SEG-BERT (Segmented GRAPH-BERT) for reference simplicity in this paper. SEG-BERT involves no node-order-variant inputs or functional components, so it handles the graph node orderless property naturally. Moreover, SEG-BERT's segmented architecture introduces three strategies to unify graph instance sizes: full-input, padding/pruning, and segment shifting. SEG-BERT is pre-trainable in an unsupervised manner and can be transferred to new tasks directly or with necessary fine-tuning. We have tested the effectiveness of SEG-BERT with experiments on seven graph instance benchmark datasets, and SEG-BERT outperforms the comparison methods on six of them with significant performance advantages.
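To make the size-unification idea concrete, the following is a minimal sketch of the padding/pruning strategy described above: graphs with fewer nodes than a target size are zero-padded, and larger graphs are truncated. The function name, the use of a node feature matrix, and the choice to prune trailing nodes are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def pad_or_prune(features: np.ndarray, target_size: int) -> np.ndarray:
    """Unify a graph instance's node count to target_size.

    features: (num_nodes, dim) node feature matrix.
    Small graphs are padded with zero rows; large graphs are
    pruned to the first target_size nodes (a simplification;
    the paper's pruning criterion may differ).
    """
    num_nodes, dim = features.shape
    if num_nodes >= target_size:
        # Prune: keep only the first target_size node rows.
        return features[:target_size]
    # Pad: append zero rows until the matrix has target_size rows.
    padding = np.zeros((target_size - num_nodes, dim), dtype=features.dtype)
    return np.vstack([features, padding])
```

Under this sketch, every graph in a batch ends up with the same number of node rows, so a fixed-size transformer input can be formed; the full-input and segment-shifting strategies instead keep all nodes and process them whole or in shifted fixed-size segments.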


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Graph Classification | COLLAB | SEG-BERT | Accuracy | 78.42% | #18 |
| Graph Classification | IMDb-B | SEG-BERT | Accuracy | 77.2% | #7 |
| Graph Classification | IMDb-M | SEG-BERT | Accuracy | 53.4% | #10 |
| Graph Classification | MUTAG | SEG-BERT | Accuracy | 90.85% | #17 |
| Graph Classification | PROTEINS | SEG-BERT | Accuracy | 77.09% | #30 |
| Graph Classification | PTC | SEG-BERT | Accuracy | 68.86% | #15 |
