Neural Contourlet Network for Monocular 360 Depth Estimation

3 Aug 2022  ·  Zhijie Shen, Chunyu Lin, Lang Nie, Kang Liao, Yao Zhao ·

For a monocular 360 image, depth estimation is challenging because the distortion increases along the latitude. To perceive the distortion, existing methods devote themselves to designing deep and complex network architectures. In this paper, we provide a new perspective that constructs an interpretable and sparse representation for a 360 image. Considering the importance of geometric structure in depth estimation, we utilize the contourlet transform to capture an explicit geometric cue in the spectral domain and integrate it with an implicit cue in the spatial domain. Specifically, we propose a neural contourlet network consisting of a convolutional neural network and a contourlet transform branch. In the encoder stage, we design a spatial-spectral fusion module to effectively fuse the two types of cues. In contrast to the encoder, the decoder employs the inverse contourlet transform with learned low-pass subbands and band-pass directional subbands to compose the depth map. Experiments on three popular panoramic image datasets demonstrate that the proposed approach outperforms state-of-the-art schemes with faster convergence. Code is available at https://github.com/zhijieshen-bjtu/Neural-Contourlet-Network-for-MODE.
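To make the spatial-spectral fusion idea concrete, below is a minimal PyTorch sketch of how CNN features could be merged with features derived from contourlet subbands. It is not the authors' implementation; the class name, channel widths, and fusion choice (resize, 1x1 projection, concatenation, 3x3 convolution) are assumptions for illustration only.

import torch
import torch.nn as nn


class SpatialSpectralFusion(nn.Module):
    """Hypothetical fusion block: merge spatial CNN features with
    contourlet band-pass directional subband features."""

    def __init__(self, spatial_ch: int, spectral_ch: int, out_ch: int):
        super().__init__()
        # Project the spectral-domain (subband) features to the spatial branch's width.
        self.spectral_proj = nn.Conv2d(spectral_ch, spatial_ch, kernel_size=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * spatial_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, spatial_feat: torch.Tensor, subband_feat: torch.Tensor) -> torch.Tensor:
        # Bring the subband features to the spatial resolution of the CNN branch.
        subband_feat = nn.functional.interpolate(
            subband_feat, size=spatial_feat.shape[-2:], mode="bilinear", align_corners=False
        )
        subband_feat = self.spectral_proj(subband_feat)
        return self.fuse(torch.cat([spatial_feat, subband_feat], dim=1))


if __name__ == "__main__":
    # Toy shapes only: 64 spatial channels, 8 directional subbands at half resolution.
    fusion = SpatialSpectralFusion(spatial_ch=64, spectral_ch=8, out_ch=64)
    spatial = torch.randn(1, 64, 128, 256)
    subbands = torch.randn(1, 8, 64, 128)
    print(fusion(spatial, subbands).shape)  # torch.Size([1, 64, 128, 256])

In the paper, one such fusion step would sit at each encoder stage, while the decoder would feed learned low-pass and directional subbands into an inverse contourlet transform to recompose the depth map; consult the linked repository for the actual design.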

Benchmark results

Task              Dataset                 Model                      Metric                   Value    Global Rank
Depth Estimation  Stanford2D3D Panoramic  Neural Contourlet Network  RMSE                     0.3528   #8
Depth Estimation  Stanford2D3D Panoramic  Neural Contourlet Network  Absolute relative error  0.0558   #3
