Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception

Low-cost, vision-centric 3D perception systems for autonomous driving have made significant progress in recent years, narrowing the gap to expensive LiDAR-based methods. The primary challenge in becoming a fully reliable alternative lies in robust depth prediction capabilities, as camera-based systems struggle with long detection ranges and adverse lighting and weather conditions. In this work, we introduce HyDRa, a novel camera-radar fusion architecture for diverse 3D perception tasks. Building upon the principles of dense BEV (Bird's Eye View)-based architectures, HyDRa introduces a hybrid fusion approach that combines the strengths of complementary camera and radar features in two distinct representation spaces. Our Height Association Transformer module leverages radar features already in the perspective view to produce more robust and accurate depth predictions. In the BEV, we refine the initial sparse representation with a Radar-weighted Depth Consistency module. HyDRa achieves a new state-of-the-art for camera-radar fusion of 64.2 NDS (+1.8) and 58.4 AMOTA (+1.5) on the public nuScenes dataset. Moreover, our new semantically rich and spatially accurate BEV features can be directly converted into a powerful occupancy representation, beating all previous camera-based methods on the Occ3D benchmark by an impressive 3.7 mIoU.
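
The two-stage hybrid fusion described in the abstract can be pictured with a minimal sketch. The module names, channel sizes, and tensor shapes below are illustrative assumptions, not HyDRa's actual implementation; they only show the general pattern of (1) mixing radar features into the perspective view before depth prediction and (2) fusing the lifted camera features with radar features again in BEV.

```python
# Conceptual sketch of camera-radar hybrid fusion (NOT the authors' code).
# All shapes, channel counts, and layer choices are illustrative placeholders.
import torch
import torch.nn as nn


class PerspectiveRadarFusion(nn.Module):
    """Stage 1: fuse radar features into the perspective (image) view to aid depth prediction."""

    def __init__(self, img_channels=256, radar_channels=64, depth_bins=64):
        super().__init__()
        # 1x1 conv mixes image features with radar features rasterized onto the image plane
        self.mix = nn.Conv2d(img_channels + radar_channels, img_channels, kernel_size=1)
        self.depth_head = nn.Conv2d(img_channels, depth_bins, kernel_size=1)

    def forward(self, img_feat, radar_feat_pv):
        # img_feat:      (B, C_img, H, W) image backbone features
        # radar_feat_pv: (B, C_rad, H, W) radar features in perspective view
        fused = self.mix(torch.cat([img_feat, radar_feat_pv], dim=1))
        depth_logits = self.depth_head(fused)          # per-pixel depth distribution
        return fused, depth_logits.softmax(dim=1)


class BEVFusion(nn.Module):
    """Stage 2: combine camera features lifted to BEV with radar BEV features."""

    def __init__(self, cam_channels=80, radar_channels=64, out_channels=128):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_channels + radar_channels, out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_bev, radar_bev):
        # cam_bev:   (B, C_cam, Hb, Wb) camera features lifted via the predicted depth
        # radar_bev: (B, C_rad, Hb, Wb) radar point features pillarized into BEV
        return self.fuse(torch.cat([cam_bev, radar_bev], dim=1))
```

The resulting fused BEV feature map is what downstream heads (detection, tracking, or an occupancy decoder) would consume; the actual HyDRa modules (Height Association Transformer, Radar-weighted Depth Consistency) are more involved than this two-convolution illustration.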


Datasets

nuScenes, Occ3D-nuScenes

Task | Dataset | Model | Metric | Value | Global Rank
3D Multi-Object Tracking | nuScenes | HyDRa | AMOTA | 0.584 | #56
3D Object Detection | nuScenes | HyDRa | NDS | 0.64 | #136
3D Object Detection | nuScenes | HyDRa | mAP | 0.57 | #137
3D Object Detection | nuScenes | HyDRa | mATE | 0.40 | #182
3D Object Detection | nuScenes | HyDRa | mASE | 0.25 | #113
3D Object Detection | nuScenes | HyDRa | mAOE | 0.42 | #109
3D Object Detection | nuScenes | HyDRa | mAVE | 0.25 | #284
3D Object Detection | nuScenes | HyDRa | mAAE | 0.12 | #269
3D Object Detection | nuScenes (Camera-Radar) | HyDRa | NDS | 64.2 | #1
3D Multi-Object Tracking | nuScenes (Camera-Radar) | HyDRa | AMOTA | 0.584 | #1
Prediction of Occupancy Grid Maps | Occ3D-nuScenes | HyDRa R50 | mIoU | 44.4 | #4
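
For context, the nuScenes Detection Score (NDS) listed above is the benchmark's standard aggregate of mAP and the five true-positive error metrics (this is the nuScenes definition, not something specific to HyDRa):

NDS = \frac{1}{10}\left[\, 5 \cdot \mathrm{mAP} + \sum_{\mathrm{mTP} \in \mathbb{TP}} \big(1 - \min(1, \mathrm{mTP})\big) \right], \qquad \mathbb{TP} = \{\mathrm{mATE}, \mathrm{mASE}, \mathrm{mAOE}, \mathrm{mAVE}, \mathrm{mAAE}\}

Plugging in the rounded leaderboard values approximately recovers the reported score:

\frac{1}{10}\left[ 5 \cdot 0.57 + (1-0.40) + (1-0.25) + (1-0.42) + (1-0.25) + (1-0.12) \right] = \frac{2.85 + 3.56}{10} \approx 0.64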
