How Frequency Affects Graph Neural Networks

29 Sep 2021 · Xueqi Ma, Yubo Zhang, Weifeng Liu, Yue Gao

Graph neural networks (GNNs) have demonstrated powerful expressiveness for graph representation with various message passing schemes, but they fail to improve prediction performance by stacking layers because of over-smoothing. Research on the frequency principle in deep neural networks motivates us to explore the effect of frequency on the design of deep GNNs. In this work, we decompose input features into low-frequency and high-frequency signals and analyze how the different frequencies behave in GNNs as depth increases. We prove that low-frequency signals are learned faster in GNNs, i.e., they are more prone to over-smoothing than high-frequency signals. Based on this frequency principle for GNNs, we present a novel and powerful GNN framework, Multi-Scale Frequency Enhanced Graph Neural Networks (MSF-GNNs), which considers multi-scale representations obtained from wavelet decomposition. Specifically, we design an information propagation rule that accounts for the properties of different frequency signals and exploits their complementary advantages for better node representations. To encourage consistent outputs across the multi-scale representations, we employ a consistency regularization loss. Extensive experiments demonstrate the effectiveness of the proposed MSF-GNNs on node classification compared to state-of-the-art methods. The theoretical study and experimental results further show that MSF-GNNs alleviates the over-smoothing issue.
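As a concrete illustration of the two ingredients named in the abstract, the sketch below shows (a) a split of node features into low- and high-frequency components and (b) a consistency penalty across the resulting per-scale representations. This is not the authors' implementation: the paper uses a wavelet decomposition and its own propagation rule, whereas this minimal NumPy example assumes a simple Laplacian low-pass/high-pass filter pair and a generic variance-style consistency term; all function names and the filter coefficient 0.5 are illustrative assumptions.

```python
import numpy as np

def frequency_split(adj: np.ndarray, x: np.ndarray):
    """Split node features x into low- and high-frequency components on graph `adj`.

    Assumption: a single low-pass filter I - 0.5*L stands in for the paper's
    wavelet decomposition; the high-frequency part is the residual.
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(n) - d_inv_sqrt @ adj @ d_inv_sqrt   # L = I - D^{-1/2} A D^{-1/2}
    x_low = (np.eye(n) - 0.5 * lap) @ x               # low-pass: attenuates high graph frequencies
    x_high = x - x_low                                # high-pass residual, equals 0.5 * L @ x
    return x_low, x_high

def consistency_loss(preds):
    """Generic consistency term: mean squared deviation of per-scale outputs
    from their average (an assumption, not necessarily the paper's exact loss)."""
    center = np.mean(preds, axis=0)
    return float(np.mean([(p - center) ** 2 for p in preds]))

# Toy usage: a 4-node path graph with 2-dimensional features.
adj = np.array([[0., 1., 0., 0.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [0., 0., 1., 0.]])
x = np.random.randn(4, 2)
x_low, x_high = frequency_split(adj, x)
print(consistency_loss([x_low, x_high]))
```

In this reading, x_low converges toward a locally averaged signal as such filters are stacked (the over-smoothing the paper analyzes), while x_high retains the detail that smoothing discards, which is why combining scales can help deeper models.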
