GLASS: GNN with Labeling Tricks for Subgraph Representation Learning

ICLR 2022 · Xiyuan Wang, Muhan Zhang

Despite the remarkable achievements of Graph Neural Networks (GNNs) in graph representation learning, few works have tried to use them to predict properties of subgraphs within a whole graph. The existing state-of-the-art method, SubGNN, introduces an overly complicated subgraph-level GNN model that synthesizes six artificial properties, yet holds only a marginal edge over a plain GNN. We find that the reason a plain GNN fails when applied directly to the whole graph, with the representations of nodes within the subgraph then pooled, is that it cannot tell whether nodes are in the subgraph when exchanging messages between them. With this insight, we introduce an expressive and scalable labeling trick, namely max-zero-one, and propose GLASS (GNN with LAbel for SubgraphS). Compared with SubGNN, GLASS is more expressive, more scalable, and easier to implement. Experiments show that GLASS outperforms the strongest baseline by 13% on average, ablation analysis shows that our max-zero-one labeling trick can boost the performance of a plain GNN by up to 23%, and training a GLASS model takes only 28% of the time needed for SubGNN on average.
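
The core idea in the abstract, marking subgraph membership with zero-one node labels before message passing and then pooling, can be sketched compactly. Below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the class names, layer sizes, and mean-aggregation layer are illustrative assumptions. It concatenates a 0/1 membership label to each node's features, runs a simple two-layer GNN over the whole graph, and mean-pools the representations of the subgraph's nodes. (In the max-zero-one variant, as we understand it, a mini-batch of subgraphs shares one labeled pass, with a node labeled 1 if it belongs to any subgraph in the batch.)

```python
import torch
import torch.nn as nn

class PlainGNNLayer(nn.Module):
    """One mean-aggregation message-passing layer: h' = ReLU(W [h || mean_neighbors(h)])."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, h, adj):
        # adj: dense (n, n) adjacency; row-normalize to average neighbor features.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = (adj @ h) / deg
        return torch.relu(self.lin(torch.cat([h, neigh], dim=-1)))

class ZeroOneLabelGNN(nn.Module):
    """Sketch of the zero-one labeling trick: append a 0/1 subgraph-membership
    label to node features, so message passing can distinguish nodes inside
    the subgraph from nodes outside it."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.layer1 = PlainGNNLayer(in_dim + 1, hid_dim)  # +1 for the label channel
        self.layer2 = PlainGNNLayer(hid_dim, hid_dim)
        self.readout = nn.Linear(hid_dim, 1)

    def forward(self, x, adj, sub_nodes):
        label = torch.zeros(x.size(0), 1)
        label[sub_nodes] = 1.0                       # the zero-one labeling trick
        h = torch.cat([x, label], dim=-1)
        h = self.layer2(self.layer1(h, adj), adj)
        return self.readout(h[sub_nodes].mean(dim=0))  # pool the subgraph's nodes

# Toy usage: 5 nodes, 4-dim features, predict a property of subgraph {0, 2}.
x = torch.randn(5, 4)
adj = (torch.rand(5, 5) > 0.5).float()
model = ZeroOneLabelGNN(in_dim=4, hid_dim=8)
print(model(x, adj, torch.tensor([0, 2])))
```

Without the label channel, the two inner layers would compute identical messages for a node regardless of whether a given subgraph contains it, which is exactly the failure mode of plain-GNN pooling that the abstract describes.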
