FEATURE-AUGMENTED HYPERGRAPH NEURAL NETWORKS

29 Sep 2021 · Xueqi Ma, Pan Li, Qiong Cao, James Bailey, Yue Gao

Graph neural networks (GNNs) and their variants have demonstrated superior performance in learning graph representations by aggregating features based on graph or hypergraph structures. However, most existing graph-based GNNs are susceptible to over-smoothing and are not robust to perturbations. For representation learning tasks, hypergraphs usually have more expressive power than graphs because they can encode higher-order data correlations. In this paper, we propose Feature-Augmented Hypergraph Neural Networks (FAHGNN), which focus on hypergraph structures. In FAHGNN, we explore the influence of node features on the expressive power of GNNs and augment features by decomposing node information into common features and personal features. Specifically, for a given node, the common features carry the information shared with other nodes in its hyperedges, while the personal features capture its distinctive information; the two feature types therefore have different discriminative power. Considering the different properties of these two kinds of features, we design separate propagation strategies for information aggregation on hypergraphs. Furthermore, during propagation we further augment features by randomly dropping node features, and we apply consistency regularization across the different augmentations of the two feature types to encourage consistent predictions. Extensive experiments on several benchmarks show that FAHGNN significantly outperforms state-of-the-art methods on node classification tasks. Our theoretical analysis and experimental results further support the effectiveness of FAHGNN in mitigating over-smoothing and enhancing model robustness.
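The abstract combines three ingredients: message passing over a hypergraph incidence structure, feature augmentation by randomly dropping node features, and a consistency term across augmented views. The following is a minimal NumPy sketch of these ideas, not the paper's actual method: it assumes a simple degree-normalized two-stage (node → hyperedge → node) propagation and a mean-squared consistency term, and the function names and toy incidence matrix are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def hypergraph_propagate(X, H):
    """One round of hypergraph message passing: average node features into each
    hyperedge, then average hyperedge features back onto each node.
    X: (num_nodes, dim) features; H: (num_nodes, num_edges) incidence matrix."""
    De = H.sum(axis=0)                     # hyperedge degrees
    Dv = H.sum(axis=1)                     # node degrees
    edge_feat = (H.T @ X) / De[:, None]    # node -> hyperedge aggregation
    return (H @ edge_feat) / Dv[:, None]   # hyperedge -> node aggregation

def drop_node_features(X, p):
    """Feature augmentation: zero out entire node feature rows with probability p."""
    keep = rng.random(X.shape[0]) > p
    return X * keep[:, None]

def consistency_loss(views):
    """Penalize disagreement between outputs from different augmentations:
    mean squared distance of each view to the average view."""
    avg = np.mean(views, axis=0)
    return float(np.mean([(v - avg) ** 2 for v in views]))

# Toy hypergraph: 4 nodes, 2 hyperedges (rows = nodes, cols = hyperedges).
H = np.array([[1, 0],
              [1, 1],
              [1, 1],
              [0, 1]], dtype=float)
X = rng.standard_normal((4, 3))

# Two stochastic augmentations of the same input, propagated on the hypergraph.
views = [hypergraph_propagate(drop_node_features(X, 0.3), H) for _ in range(2)]
loss = consistency_loss(views)
```

Since the two-stage aggregation is degree-normalized, constant input features are a fixed point of propagation, and the consistency loss is zero exactly when all augmented views agree.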
