YOLOStereo3D: A Step Back to 2D for Efficient Stereo 3D Detection

17 Mar 2021  ·  Yuxuan Liu, Lujia Wang, Ming Liu ·

Object detection in 3D with stereo cameras is an important problem in computer vision, and it is particularly crucial for low-cost autonomous mobile robots without LiDAR. Most of the current best-performing frameworks for stereo 3D object detection rely on dense depth reconstruction from disparity estimation, which makes them extremely computationally expensive. To enable real-world deployment of vision-based detection with binocular images, we take a step back to gain insights from 2D image-based detection frameworks and enhance them with stereo features. We incorporate knowledge and the inference structure from a real-time one-stage 2D/3D object detector and introduce a light-weight stereo matching module. Our proposed framework, YOLOStereo3D, is trained on a single GPU and runs at more than ten frames per second. It demonstrates performance comparable to state-of-the-art stereo 3D detection frameworks without using LiDAR data. The code will be published at https://github.com/Owen-Liuyuxuan/visualDet3D.
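As a rough illustration of the idea described in the abstract, the sketch below shows how a light-weight, correlation-style stereo matching block could augment the left-image features of a one-stage detector with stereo evidence. This is a hypothetical minimal example in PyTorch, not the authors' module: the class name LightweightStereoMatching, the disparity range, and the fusion layer are illustrative assumptions; the actual implementation lives in the linked repository.

```python
# Minimal sketch (assumed, not the paper's exact module): correlate left/right
# backbone features over a small disparity range and fuse the result back into
# the left features consumed by a one-stage detection head.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LightweightStereoMatching(nn.Module):
    """Builds a low-resolution correlation volume between left/right feature maps
    and fuses it with the left (monocular) features."""

    def __init__(self, in_channels: int, max_disparity: int = 24):
        super().__init__()
        self.max_disparity = max_disparity
        # Project the concatenated [features, correlation scores] back to the
        # original channel count expected by the detection head.
        self.fuse = nn.Sequential(
            nn.Conv2d(in_channels + max_disparity, in_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, left_feat: torch.Tensor, right_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = left_feat.shape
        cost = left_feat.new_zeros(b, self.max_disparity, h, w)
        # Dot-product correlation between left features and right features shifted
        # by each candidate disparity; much cheaper than a full 4D cost volume.
        for d in range(self.max_disparity):
            if d == 0:
                cost[:, d] = (left_feat * right_feat).mean(dim=1)
            else:
                cost[:, d, :, d:] = (left_feat[..., d:] * right_feat[..., :-d]).mean(dim=1)
        # Concatenate stereo evidence with the monocular features and fuse.
        return self.fuse(torch.cat([left_feat, F.leaky_relu(cost)], dim=1))


if __name__ == "__main__":
    # Toy usage: features from a shared backbone applied to left and right images.
    left = torch.randn(2, 64, 36, 160)
    right = torch.randn(2, 64, 36, 160)
    stereo_feat = LightweightStereoMatching(64)(left, right)
    print(stereo_feat.shape)  # torch.Size([2, 64, 36, 160])
```

The design intent this sketch mimics is keeping the stereo computation light (a shallow correlation over a limited disparity range at a coarse feature scale) so the overall pipeline stays close to a real-time 2D detector rather than a dense disparity network.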


Datasets

KITTI

Results

Task                                     Dataset                       Model         Metric  Value  Global Rank
3D Object Detection From Stereo Images   KITTI Cars Moderate           YOLOStereo3D  AP75    41.25  #8
3D Object Detection From Stereo Images   KITTI Pedestrians Moderate    YOLOStereo3D  AP50    19.75  #5
