The Card Shuffling Hypotheses: Building a Time and Memory Efficient Graph Convolutional Network

This paper investigates the design of time- and memory-efficient graph convolutional networks (GCNs). State-of-the-art GCNs rely on $K$-nearest neighbor (KNN) searches for local feature aggregation and apply feature extraction operations from layer to layer. Based on a mathematical analysis of existing graph convolution operations, we articulate the following two card shuffling hypotheses. (1) Shuffling the nearest-neighbor selection for KNN searches in a multi-layered GCN approximately preserves the local geometric structures of 3D representations. (2) Shuffling the order of local feature aggregation and feature extraction leads to equivalent or similar composite operations for GCNs. The two hypotheses point to two possible directions for accelerating modern GCNs: reasonable shuffling of the cards (neighbor selection or local feature operations) can significantly improve time and memory efficiency. A series of experiments shows that network architectures designed according to the proposed card shuffling hypotheses significantly decrease both time and memory consumption (e.g., by about 50% for point cloud classification and semantic segmentation) while maintaining comparable accuracy on several important 3D deep learning tasks, namely 3D classification, part segmentation, semantic segmentation, and mesh generation.
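
To make the two hypotheses concrete, below is a minimal PyTorch sketch (not the authors' released code) of an EdgeConv-style graph convolution layer. It illustrates hypothesis 1 by computing KNN indices once on the input coordinates and reusing them in every layer instead of re-running KNN in feature space, and hypothesis 2 by optionally max-pooling over neighbors before the point-wise MLP so the MLP runs on $N$ points rather than $N \times K$ edge features. The names `SharedKNNEdgeConv`, `knn_indices`, `gather_neighbors`, and the `aggregate_first` flag are illustrative assumptions, and whether the swapped order is exactly equivalent depends on the operator, as the paper only claims equivalence or similarity.

```python
import torch
import torch.nn as nn

def knn_indices(x, k):
    # x: (B, N, C) point features; returns (B, N, k) neighbor indices
    dist = torch.cdist(x, x)                                  # pairwise distances
    return dist.topk(k + 1, largest=False).indices[..., 1:]   # drop self

def gather_neighbors(x, idx):
    # x: (B, N, C), idx: (B, N, k) -> neighbor features (B, N, k, C)
    B, N, _ = x.shape
    k = idx.shape[-1]
    batch = torch.arange(B, device=x.device).view(B, 1, 1).expand(B, N, k)
    return x[batch, idx]

class SharedKNNEdgeConv(nn.Module):
    """EdgeConv-style layer that reuses precomputed neighbor indices
    (hypothesis 1) and can aggregate before feature extraction (hypothesis 2).
    Illustrative sketch; names and defaults are not from the paper."""
    def __init__(self, in_dim, out_dim, aggregate_first=True):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())
        self.aggregate_first = aggregate_first

    def forward(self, x, idx):
        neighbors = gather_neighbors(x, idx)                  # (B, N, k, C)
        center = x.unsqueeze(2).expand_as(neighbors)
        edge = torch.cat([center, neighbors - center], dim=-1)  # (B, N, k, 2C)
        if self.aggregate_first:
            # hypothesis 2: max-pool over neighbors first, then MLP on N points
            return self.mlp(edge.max(dim=2).values)           # (B, N, out_dim)
        # standard order: MLP on N*k edge features, then max-pool
        return self.mlp(edge).max(dim=2).values               # (B, N, out_dim)

# Usage: compute KNN once on the input coordinates and reuse it in every layer.
pts = torch.randn(2, 1024, 3)
idx = knn_indices(pts, k=16)                                  # computed once
layer1 = SharedKNNEdgeConv(3, 64)
layer2 = SharedKNNEdgeConv(64, 128)
feats = layer2(layer1(pts, idx), idx)                         # indices reused
print(feats.shape)                                            # torch.Size([2, 1024, 128])
```

Reusing the index tensor removes the per-layer KNN cost, and pooling before the MLP shrinks the largest intermediate activation by a factor of $K$, which is where the reported time and memory savings would come from under these assumptions.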
