Adaptive Graph Capsule Convolutional Networks

29 Sep 2021  ·  Shangwei Wu, Yingtong Xiong, Chuliang Weng

In recent years, many studies have applied Convolutional Neural Networks (CNNs) to non-grid graph data, giving rise to Graph Convolutional Networks (GCNs). However, prevalent GCNs suffer from two main restrictions. First, GCNs have a latent information-loss problem, since they iterate through graph convolutions with scalar-valued neurons rather than vector-valued ones. Second, GCNs are trained with static, fixed architectures, which limits their representation power. To tackle these two issues, we build on a GNN model (CapsGNN) that encodes node embeddings as vectors and propose Adaptive Graph Capsule Convolutional Networks (AdaGCCN), which adaptively adjust the model architecture at runtime. Specifically, we leverage Reinforcement Learning (RL) to design an assistant module that continuously selects the optimal modification to the model structure throughout training. Moreover, we determine the architecture search space by analyzing the impacts of the model's depth and width. To mitigate the computational overhead introduced by the assistant module, we deploy multiple workers that compute in parallel on the GPU. Evaluations show that AdaGCCN achieves state-of-the-art accuracy and outperforms CapsGNN on almost all datasets in both the bioinformatics and social domains. We also conduct experiments that demonstrate the efficiency of the parallelization strategy.
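The abstract's RL assistant, which repeatedly picks a structural modification (e.g. to depth or width) during training, can be illustrated with a minimal sketch. This is not the paper's method: the action names, the epsilon-greedy controller, the reward model, and all hyperparameters here are hypothetical stand-ins for whatever policy and reward (e.g. validation-accuracy change) the paper actually uses.

```python
import random

# Hypothetical architecture modifications the controller may choose from.
ACTIONS = ["add_layer", "remove_layer", "widen", "narrow", "keep"]

def select_action(q_values, epsilon, rng):
    """Epsilon-greedy: with probability epsilon explore a random action,
    otherwise exploit the action with the highest running value estimate."""
    if rng.random() < epsilon:
        return rng.choice(ACTIONS)
    return max(q_values, key=q_values.get)

def train_controller(reward_fn, epochs=500, epsilon=0.2, lr=0.1, seed=0):
    """Run the assistant loop: each 'epoch' it applies one modification
    and observes a scalar reward (stand-in for an accuracy delta)."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(epochs):
        action = select_action(q, epsilon, rng)
        reward = reward_fn(action, rng)
        q[action] += lr * (reward - q[action])  # incremental value update
    return q

def toy_reward(action, rng):
    """Toy reward model: pretend widening helps most on this dataset."""
    base = {"add_layer": 0.2, "remove_layer": -0.1,
            "widen": 0.5, "narrow": -0.2, "keep": 0.0}[action]
    return base + rng.gauss(0, 0.05)

q = train_controller(toy_reward)
best = max(q, key=q.get)  # the controller converges on "widen" here
```

In the paper's setting, the reward would come from the actual training process, and, per the abstract, evaluating candidate modifications is what the parallel GPU workers are meant to accelerate.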

