CGCL: Collaborative Graph Contrastive Learning without Handcrafted Graph Data Augmentations

5 Nov 2021  ·  Tianyu Zhang, Yuxiang Ren, Wenzheng Feng, Weitao Du, Xuecang Zhang

Unsupervised graph representation learning is a challenging problem. The success of contrastive methods in unsupervised representation learning on structured data has inspired similar attempts on graphs. Existing graph contrastive learning (GCL) methods aim to learn invariance across multiple augmentation views, which makes them heavily reliant on handcrafted graph augmentations; however, inappropriate augmentations can jeopardize exactly this invariance. In this paper, we demonstrate the potential hazards of inappropriate augmentations and then propose a novel Collaborative Graph Contrastive Learning framework (CGCL). The framework harnesses multiple graph encoders to observe the same graph, and the features observed by the different encoders serve as the contrastive views. This avoids introducing unstable perturbations and preserves the invariance. To ensure collaboration among diverse graph encoders, we propose the concepts of asymmetric architecture and complementary encoders as design principles. To further justify this design, we use two quantitative metrics to measure the assembly of CGCL. Extensive experiments demonstrate the advantages of CGCL in unsupervised graph-level representation learning and the potential of the collaborative framework. The source code for reproducibility is available at https://github.com/zhangtia16/CGCL
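The core mechanism lends itself to a short sketch: two structurally different GNN encoders process the same graph, and their graph-level embeddings are treated as the two views of a contrastive objective, so no handcrafted augmentation is applied. The following minimal, self-contained PyTorch sketch illustrates this idea under stated assumptions; the encoder architectures, depths, pooling, and the NT-Xent loss are illustrative choices of ours, not the authors' exact implementation (see the linked repository for that).

```python
# Hypothetical sketch of the CGCL idea: two different encoders observe the
# SAME graph, and their graph-level embeddings form the two contrastive views.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGNN(nn.Module):
    """Dense message-passing encoder: H <- ReLU(A_hat @ H @ W) per layer."""
    def __init__(self, in_dim, hid_dim, num_layers):
        super().__init__()
        dims = [in_dim] + [hid_dim] * num_layers
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(num_layers)
        )

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) normalized adjacency
        for layer in self.layers:
            x = F.relu(layer(adj @ x))
        return x.mean(dim=0)  # mean-pool nodes into a graph-level embedding

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss: matching graphs across views are positives (diagonal)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau            # (B, B) cross-view similarities
    labels = torch.arange(z1.size(0))     # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Asymmetric, complementary encoder pair (the depths differ on purpose).
enc_a = SimpleGNN(in_dim=16, hid_dim=32, num_layers=2)
enc_b = SimpleGNN(in_dim=16, hid_dim=32, num_layers=4)

# Toy batch of random graphs: (node features, symmetrically normalized adjacency).
graphs = []
for n in (5, 7, 6, 8):
    a = (torch.rand(n, n) < 0.3).float()
    a = ((a + a.t()) > 0).float() + torch.eye(n)   # symmetrize, add self-loops
    d = a.sum(1).pow(-0.5)
    graphs.append((torch.rand(n, 16), d[:, None] * a * d[None, :]))

opt = torch.optim.Adam([*enc_a.parameters(), *enc_b.parameters()], lr=1e-3)
z_a = torch.stack([enc_a(x, adj) for x, adj in graphs])
z_b = torch.stack([enc_b(x, adj) for x, adj in graphs])
loss = nt_xent(z_a, z_b)  # views come from different encoders, not augmentations
loss.backward()
opt.step()
print(f"contrastive loss: {loss.item():.4f}")
```

The deliberate depth mismatch between `enc_a` and `enc_b` mirrors the asymmetric-architecture principle described above: complementary encoders observe the graph differently, and that difference is what makes their outputs informative as contrastive views.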
