Compact Global Descriptor for Neural Networks

23 Jul 2019  ·  Xiangyu He, Ke Cheng, Qiang Chen, Qinghao Hu, Peisong Wang, Jian Cheng ·

Long-range dependency modeling, widely used to capture spatiotemporal correlations, has been shown to be effective in CNN-dominated computer vision tasks. Yet neither stacking convolutional operations to enlarge receptive fields nor recent non-local modules are computationally efficient. In this paper, we present a generic family of lightweight global descriptors for modeling the interactions between positions across different dimensions (e.g., channels, frames). This descriptor enables subsequent convolutions to access informative global features with negligible computational complexity and parameter overhead. Benchmark experiments show that the proposed method can complement state-of-the-art long-range mechanisms with a significant reduction in extra computing cost. Code available at https://github.com/HolmesShuan/Compact-Global-Descriptor.
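The abstract describes the general idea: pool a feature map into a compact global descriptor, model interactions across one dimension (e.g., channels), and let the result rescale the features before the next convolution. A minimal sketch of this pattern is given below; the exact formulation in the paper differs, and the pooling choice, the `w` interaction matrix, the `tanh` gate, and the residual-style rescaling here are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def compact_global_descriptor(x, w):
    """Illustrative sketch of a lightweight global descriptor (assumed form).

    x : feature map of shape (C, H, W)
    w : hypothetical learnable (C, C) matrix modeling channel interactions
    Returns x rescaled by a global channel descriptor, so subsequent
    convolutions see globally informed features at negligible extra cost.
    """
    # Global average pooling collapses the spatial dims to one value per channel.
    g = x.mean(axis=(1, 2))                 # shape (C,)
    # A tiny linear map models pairwise interactions between channels.
    a = np.tanh(w @ g)                      # shape (C,)
    # Broadcast the descriptor back over the map as a residual-style gate.
    return x * (1.0 + a)[:, None, None]     # shape (C, H, W)
```

With `w` set to zeros the gate is the identity, so the module degrades gracefully to a plain pass-through; the extra cost is one pooling pass plus a C-by-C matrix-vector product, which is negligible next to a convolution.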


Results from the Paper


| Task                 | Dataset       | Model                           | Metric          | Value  | Global Rank |
|----------------------|---------------|---------------------------------|-----------------|--------|-------------|
| Object Detection     | COCO test-dev | Faster R-CNN + FPN + CGD        | box mAP         | 37.9   | #208        |
| Object Detection     | COCO test-dev | MobileNet-v1-SSD-300x300 + CGD  | box mAP         | 21.4   | #236        |
| Image Classification | ImageNet      | MobileNet-224 (CGD)             | Top 1 Accuracy  | 72.56% | #921        |
| Image Classification | ImageNet      | MobileNet-224 (CGD)             | Number of params| 4.26M  | #386        |
| Image Classification | ImageNet      | MobileNet-224 (CGD)             | GFLOPs          | 1.198  | #113        |