Self-supervised 360° Room Layout Estimation

30 Mar 2022 · Hao-Wen Ting, Cheng Sun, Hwann-Tzong Chen

We present the first self-supervised method for training panoramic room layout estimation models without any labeled data. Unlike per-pixel dense depth, which provides abundant correspondence constraints, the layout representation is sparse and topological, hindering the use of self-supervised reprojection consistency on images. To address this issue, we propose Differentiable Layout View Rendering, which warps a source image to the target camera pose given the layout estimated from the target image. Since each rendered pixel is differentiable with respect to the estimated layout, we can train the layout estimation model by minimizing a reprojection loss. In addition, we introduce regularization losses that encourage Manhattan alignment, ceiling-floor alignment, cycle consistency, and layout stretch consistency, further improving our predictions. Finally, we present the first self-supervised results on the ZillowIndoor and MatterportLayout datasets. Our approach also shows promise in data-scarce scenarios and active learning, which would have immediate value for real-estate virtual tour software. Code is available at https://github.com/joshua049/Stereo-360-Layout.
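The core idea is easiest to see as a photometric reprojection loss over equirectangular panoramas. Below is a minimal PyTorch sketch, assuming the estimated layout has already been rendered into a per-pixel depth map for the target view (the differentiable step the paper's Differentiable Layout View Rendering provides); the function names, tensor shapes, and pose convention here are illustrative assumptions, not the authors' actual code.

```python
import math

import torch
import torch.nn.functional as F


def equirect_rays(h, w, device):
    # Unit ray direction for every pixel of an equirectangular panorama.
    v, u = torch.meshgrid(
        torch.arange(h, device=device, dtype=torch.float32),
        torch.arange(w, device=device, dtype=torch.float32),
        indexing="ij")
    lon = (u / w - 0.5) * 2 * math.pi       # longitude in [-pi, pi)
    lat = (0.5 - v / h) * math.pi           # latitude in [-pi/2, pi/2]
    x = torch.cos(lat) * torch.sin(lon)
    y = torch.sin(lat)
    z = torch.cos(lat) * torch.cos(lon)
    return torch.stack([x, y, z], dim=-1)   # (h, w, 3)


def reprojection_loss(src_img, tgt_img, layout_depth, R, t):
    """L1 photometric loss between the target panorama and the source
    panorama warped into the target view via the layout-induced depth.

    src_img, tgt_img: (1, 3, H, W) RGB panoramas
    layout_depth:     (1, 1, H, W) depth rendered from the estimated
                      layout (assumed differentiable w.r.t. the layout)
    R (3, 3), t (3,): relative pose mapping target-frame points into
                      the source camera frame (a hypothetical convention)
    """
    _, _, h, w = tgt_img.shape
    rays = equirect_rays(h, w, tgt_img.device)        # (H, W, 3)
    pts = layout_depth[0, 0, ..., None] * rays        # back-project to 3D
    pts = pts @ R.T + t                               # move to source frame
    r = pts.norm(dim=-1).clamp(min=1e-6)
    lon = torch.atan2(pts[..., 0], pts[..., 2])
    lat = torch.asin((pts[..., 1] / r).clamp(-1.0, 1.0))
    # Normalized grid coords for grid_sample: x along width, y along height.
    grid = torch.stack([lon / math.pi, -2.0 * lat / math.pi], dim=-1)[None]
    warped = F.grid_sample(src_img, grid, padding_mode="border",
                           align_corners=False)
    return (warped - tgt_img).abs().mean()
```

Minimizing this loss with respect to the layout parameters (through the rendered depth map) supplies the supervisory signal; the Manhattan, ceiling-floor, cycle, and stretch regularizers mentioned in the abstract would be added on top of it.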
