Multi-View 3D Object Detection Network for Autonomous Driving

CVPR 2017 · Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li, Tian Xia

This paper aims at high-accuracy 3D object detection in autonomous driving scenarios. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both a LIDAR point cloud and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. The proposal network generates 3D candidate boxes efficiently from the bird's-eye-view representation of the 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state of the art by around 25% and 30% AP on the tasks of 3D localization and 3D detection, respectively. In addition, for 2D detection, our approach obtains 10.3% higher AP than the state of the art among LIDAR-based methods on the hard data.
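The bird's-eye-view encoding is concrete enough to sketch. Below is a minimal NumPy sketch, assuming the representation described in the paper: the point cloud is discretized into a 2D grid with per-slice maximum-height maps, an intensity map holding the reflectance of the highest point in each cell, and a log-normalized density map. The function name, grid extents, resolution, and slice count here are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def lidar_to_bev(points, x_range=(0.0, 70.4), y_range=(-40.0, 40.0),
                 z_range=(-2.0, 1.0), res=0.1, num_slices=3):
    """Rasterize a LIDAR point cloud of shape (N, 4) = (x, y, z, reflectance)
    into bird's-eye-view maps: num_slices height maps, one intensity map,
    and one density map, stacked as a (num_slices + 2, H, W) array.
    All ranges/resolutions are illustrative defaults, not the paper's."""
    x, y, z, r = points[:, 0], points[:, 1], points[:, 2], points[:, 3]

    # Keep only points inside the cropped region of interest.
    mask = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]) &
            (z >= z_range[0]) & (z < z_range[1]))
    x, y, z, r = x[mask], y[mask], z[mask], r[mask]

    H = int(round((x_range[1] - x_range[0]) / res))
    W = int(round((y_range[1] - y_range[0]) / res))
    rows = ((x - x_range[0]) / res).astype(np.int64)
    cols = ((y - y_range[0]) / res).astype(np.int64)

    # Assign each point to a vertical slice of the cropped z-extent.
    slice_h = (z_range[1] - z_range[0]) / num_slices
    slices = np.minimum(((z - z_range[0]) / slice_h).astype(np.int64),
                        num_slices - 1)

    height = np.zeros((num_slices, H, W), dtype=np.float32)
    intensity = np.zeros((H, W), dtype=np.float32)
    top_z = np.full((H, W), -np.inf, dtype=np.float32)
    counts = np.zeros((H, W), dtype=np.float32)

    for i in range(x.shape[0]):
        u, v, s = rows[i], cols[i], slices[i]
        zi = z[i] - z_range[0]                       # height above the crop floor
        height[s, u, v] = max(height[s, u, v], zi)   # max height per slice cell
        counts[u, v] += 1.0
        if z[i] > top_z[u, v]:                       # reflectance of the highest point
            top_z[u, v] = z[i]
            intensity[u, v] = r[i]

    # Density per cell, log-normalized (assumed: min(1, log(N + 1) / log(64))).
    density = np.minimum(1.0, np.log(counts + 1.0) / np.log(64.0))
    return np.concatenate([height, intensity[None], density[None]], axis=0)
```

The resulting (num_slices + 2)-channel map can be fed to a standard 2D convolutional backbone, which is what makes generating 3D proposals from the bird's-eye view efficient.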

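The deep fusion scheme can be sketched in the same spirit. In deep fusion, each view keeps its own per-layer transformation, but every layer consumes a join of all views' previous activations, so the bird's-eye-view, front-view, and RGB paths interact at every depth rather than only at the input (early fusion) or the output (late fusion). A minimal PyTorch sketch follows, assuming an element-wise mean as the join operation; the class name, linear layers, and feature dimension are hypothetical stand-ins for the paper's convolutional region-wise features.

```python
import torch
import torch.nn as nn

class DeepFusion(nn.Module):
    """Hedged sketch of deep fusion over per-view ROI features.
    Every layer of every view consumes the element-wise mean of all
    views' previous activations (the assumed join operation)."""
    def __init__(self, dim=256, num_views=3, num_layers=3):
        super().__init__()
        # One private transformation per view at each fusion depth.
        self.layers = nn.ModuleList([
            nn.ModuleList([nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
                           for _ in range(num_views)])
            for _ in range(num_layers)
        ])

    def forward(self, feats):
        # feats: list of per-view feature tensors, each of shape (B, dim),
        # e.g. [bird's-eye-view, front-view, RGB] after ROI pooling.
        for per_view in self.layers:
            joined = torch.stack(feats, dim=0).mean(dim=0)  # element-wise mean join
            feats = [h(joined) for h in per_view]
        return torch.stack(feats, dim=0).mean(dim=0)        # final fused feature
```

Early and late fusion fall out as the degenerate cases of this design: joining once before any view-specific layers, or once after all of them.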

Datasets


Task                             Dataset                  Model         Metric  Value  Rank
Birds Eye View Object Detection  KITTI Cars Easy val      MV (BV+FV)    AP      86.18  #2
3D Object Detection              KITTI Cars Easy val      MV3D          AP      71.29  #9
3D Object Detection              KITTI Cars Easy val      MV3D (LiDAR)  AP      71.19  #10
Birds Eye View Object Detection  KITTI Cars Hard val      MV (BV+FV)    AP      76.33  #2
3D Object Detection              KITTI Cars Hard val      MV3D          AP      56.56  #9
3D Object Detection              KITTI Cars Moderate val  MV3D          AP      62.68  #10
Birds Eye View Object Detection  KITTI Cars Moderate val  MV (BV+FV)    AP      77.32  #3

Methods


No methods listed for this paper.