Recurrent Convolutions: A Model Compression Point of View

Recurrent convolution (RC) shares the same convolutional kernels and unrolls them for multiple steps; it was originally proposed to model spatio-temporal signals. We suggest that RC can also be viewed as a model compression strategy for deep convolutional neural networks: it reduces redundancy across layers and is complementary to most existing model compression approaches. However, the performance of an RC network cannot match that of its corresponding standard network, i.e., one with the same depth but independent convolutional kernels, which limits the value of RC for model compression. In this paper, we propose a simple variant that improves RC networks: the batch normalization layers of an RC module are learned independently (not shared) across unrolling steps. We provide insights into why this works. Experiments on CIFAR show that unrolling a convolutional layer for several steps improves performance while reusing the same kernels, and thus indirectly serves as a form of model compression.
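
The core idea can be illustrated with a minimal PyTorch sketch (an assumption-laden illustration, not the authors' implementation): a single convolution is applied repeatedly, while each unrolling step keeps its own, independently learned BatchNorm layer. The module name, channel counts, and number of unrolling steps below are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn


class RecurrentConvBlock(nn.Module):
    """Hypothetical sketch of a recurrent convolution block.

    The convolution weights are shared across unrolling steps, but each
    step gets its own BatchNorm layer (the variant described in the
    abstract); a vanilla RC block would share one BatchNorm as well.
    """

    def __init__(self, channels: int, unroll_steps: int = 3):
        super().__init__()
        # One convolution whose weights are reused at every unrolling step.
        self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                              padding=1, bias=False)
        # One BatchNorm per unrolling step (learned independently).
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(channels) for _ in range(unroll_steps)
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unroll: same kernel each step, step-specific normalization.
        for bn in self.bns:
            x = self.relu(bn(self.conv(x)))
        return x


# Example: unrolling a 64-channel layer 3 times reuses one set of conv
# weights, so parameters grow only by the small per-step BatchNorm terms.
block = RecurrentConvBlock(channels=64, unroll_steps=3)
out = block(torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```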
