DEVIANT: Depth EquiVarIAnt NeTwork for Monocular 3D Object Detection

21 Jul 2022 · Abhinav Kumar, Garrick Brazil, Enrique Corona, Armin Parchami, Xiaoming Liu

Modern neural networks use building blocks such as convolutions that are equivariant to arbitrary 2D translations. However, these vanilla blocks are not equivariant to arbitrary 3D translations in the projective manifold. Even so, all monocular 3D detectors use vanilla blocks to obtain 3D coordinates, a task for which these blocks are not designed. This paper takes the first step towards convolutions equivariant to arbitrary 3D translations in the projective manifold. Since depth is the hardest quantity to estimate in monocular detection, this paper proposes the Depth EquiVarIAnt NeTwork (DEVIANT), built with existing scale-equivariant steerable blocks. As a result, DEVIANT is equivariant to depth translations in the projective manifold, whereas vanilla networks are not. The additional depth equivariance forces DEVIANT to learn consistent depth estimates; consequently, DEVIANT achieves state-of-the-art monocular 3D detection results on the KITTI and Waymo datasets in the image-only category and performs competitively with methods that use extra information. Moreover, DEVIANT outperforms vanilla networks in cross-dataset evaluation. Code and models are available at https://github.com/abhi1kumar/DEVIANT
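To give a feel for the idea behind the scale-equivariant blocks that DEVIANT builds on, the sketch below applies one shared learned kernel at several scales (crudely approximated here by dilation) and max-pools the responses over the scale axis, so that a change of object scale, induced by a depth translation under projection, shifts activations along the scale axis rather than changing them arbitrarily. This is a minimal, hypothetical illustration, not the paper's scale-equivariant steerable (SES) implementation; the class name, scale set, and pooling choice are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleConv2d(nn.Module):
    """Toy scale-equivariant-style convolution (illustration only).

    The same learned kernel is applied at several dilations, standing in
    for resampled scales, and the responses are max-pooled over the scale
    axis. This is not the DEVIANT/SES block from the paper.
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, scales=(1, 2, 3)):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.scales = scales
        self.kernel_size = kernel_size

    def forward(self, x):
        responses = []
        for d in self.scales:
            # Padding keeps the spatial size fixed for every dilation.
            pad = d * (self.kernel_size - 1) // 2
            responses.append(
                F.conv2d(x, self.weight, self.bias, padding=pad, dilation=d))
        # Stack to (B, out_ch, scales, H, W), then pool over the scale axis.
        return torch.stack(responses, dim=2).max(dim=2).values

# Example usage (hypothetical shapes):
# layer = MultiScaleConv2d(3, 8)
# out = layer(torch.randn(2, 3, 64, 64))   # -> torch.Size([2, 8, 64, 64])
```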

Results

Task | Dataset | Model | Metric | Value | Global Rank
3D Object Detection From Monocular Images | KITTI-360 | DEVIANT | AP50 | 0.88 | #5
3D Object Detection From Monocular Images | KITTI-360 | DEVIANT | AP25 | 26.96 | #7
Monocular 3D Object Detection | KITTI Cars Moderate | DEVIANT | AP Medium | 14.46 | #7
3D Object Detection From Monocular Images | Waymo Open Dataset | DEVIANT | 3D mAPH Vehicle (Front Camera Only) | 2.52 | #1
