RRNet: Repetition-Reduction Network for Energy Efficient Decoder of Depth Estimation

23 Jul 2019 · Sang-Yun Oh, Hye-Jin S. Kim, Jongeun Lee, Junmo Kim

We introduce the Repetition-Reduction Network (RRNet) for resource-constrained depth estimation, offering significantly improved efficiency in computation, memory, and energy consumption. The proposed method is built on repetition-reduction (RR) blocks, each consisting of a set of repeated convolutions and a residual-connection layer that takes the place of the pointwise reduction layer, providing a linear connection to the decoder. RRNet thereby reduces the memory usage and power consumption of the residual connections feeding the decoder layers. Relative to the baseline network (Godard et al., CVPR'17), RRNet consumes approximately 3.84 times less energy and 3.06 times less memory and runs approximately 2.21 times faster, without increasing hardware resource demands, and it outperforms current state-of-the-art lightweight architectures such as SqueezeNet, ShuffleNet, MobileNet, and PyDNet.
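To make the block structure described above more concrete, here is a minimal PyTorch-style sketch of one plausible reading of an RR block: a stack of repeated 3x3 convolutions followed by a lightweight linear (1x1, no nonlinearity) connection that can be forwarded to the decoder. The class name RRBlock, the layer counts, and the channel widths are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class RRBlock(nn.Module):
    """Hypothetical repetition-reduction block (assumption, not the paper's code):
    repeated convolutions ("repetition") followed by a pointwise linear layer
    ("reduction") whose output serves as the connection to the decoder."""

    def __init__(self, in_channels, mid_channels, out_channels, num_repeats=3):
        super().__init__()
        # "Repetition": a set of repeated 3x3 convolutions.
        layers = []
        channels = in_channels
        for _ in range(num_repeats):
            layers += [
                nn.Conv2d(channels, mid_channels, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(mid_channels),
                nn.ReLU(inplace=True),
            ]
            channels = mid_channels
        self.repeat = nn.Sequential(*layers)
        # "Reduction": pointwise 1x1 convolution with no nonlinearity,
        # acting as the linear connection passed to the decoder.
        self.reduce = nn.Conv2d(channels, out_channels, kernel_size=1, bias=False)

    def forward(self, x):
        features = self.repeat(x)
        # The reduced tensor stands in for a heavier skip/residual path
        # to the decoder, which is where the memory and energy savings
        # claimed in the abstract would come from.
        return self.reduce(features)


if __name__ == "__main__":
    block = RRBlock(in_channels=32, mid_channels=64, out_channels=32)
    x = torch.randn(1, 32, 96, 320)   # e.g. an encoder feature map
    print(block(x).shape)             # torch.Size([1, 32, 96, 320])
```

Because the abstract leaves the exact wiring of the residual and reduction layers ambiguous, this sketch should be read only as an illustration of the repetition-then-reduction idea, not as the published architecture.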

