Rethinking Temperature in Graph Contrastive Learning

29 Sep 2021 · Ziyang Liu, Hao Feng, Chaokun Wang

Because it does not rely on scarce human-labeled data, self-supervised learning, especially contrastive learning, has attracted much attention from researchers. It has begun to show strong advantages on both IID data (independent and identically distributed data, such as images and texts) and non-IID data (such as nodes in graphs). Recently, researchers have begun to explore the interpretability of contrastive learning and have proposed metrics for measuring the quality of learned representations of IID data, such as alignment, uniformity, and semantic closeness. Understanding the relationships among node representations is important, as it helps in designing algorithms with stronger interpretability. However, few studies focus on evaluating what makes a good node representation in graph contrastive learning. In this paper, we investigate and discuss what a good representation should be for a general loss (InfoNCE) in graph contrastive learning. Through theoretical analysis, we argue that global uniformity and local separation are both necessary for learning quality. We find that these two new metrics can be regulated by the temperature coefficient in the InfoNCE loss. Based on this characteristic, we develop GLATE, a simple but effective algorithm that dynamically adjusts the temperature value during training. GLATE outperforms state-of-the-art graph contrastive learning algorithms by 2.8 and 0.9 percentage points on average on transductive and inductive learning tasks, respectively. The code is available at: https://github.com/anonymousICLR22/GLATE.
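To make the role of the temperature concrete, below is a minimal PyTorch sketch of an InfoNCE loss with a tunable temperature over two augmented views of node embeddings, plus a hypothetical linear temperature schedule. The function names and the schedule are illustrative assumptions, not the paper's actual GLATE adjustment rule.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float) -> torch.Tensor:
    """InfoNCE loss between two views of the same nodes.

    z1, z2: [num_nodes, dim] embeddings under two augmentations.
    temperature: scales the similarities; smaller values sharpen the
    softmax and penalize hard negatives more strongly, which is the
    knob that trades off global uniformity against local separation.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature  # pairwise cosine similarities
    # Positives sit on the diagonal: node i in view 1 matches node i in view 2.
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def temperature_at(epoch: int, num_epochs: int,
                   tau_start: float = 1.0, tau_end: float = 0.1) -> float:
    """Hypothetical linear anneal of the temperature over training.

    This is NOT the GLATE schedule; it only illustrates that a dynamic
    temperature can be plugged into the loss above at each epoch.
    """
    frac = epoch / max(num_epochs - 1, 1)
    return tau_start + frac * (tau_end - tau_start)
```

In use, one would compute `tau = temperature_at(epoch, num_epochs)` each epoch and pass it to `info_nce_loss`, so that the loss landscape shifts as training progresses.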
