Sensor Fusion
93 papers with code • 0 benchmarks • 2 datasets
Sensor fusion is the process of combining sensor data, or data derived from disparate sources, such that the resulting information has less uncertainty than would be possible if these sources were used individually. [Wikipedia]
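The "less uncertainty" in this definition can be made concrete with the classic inverse-variance weighting rule: fusing two independent, unbiased estimates of the same quantity yields a combined estimate whose variance is lower than either input's. A minimal sketch (the sensor values and variances below are hypothetical):

```python
# Two noisy, independent measurements of the same quantity.
z1, var1 = 10.2, 4.0   # e.g. a radar range estimate (hypothetical)
z2, var2 = 9.6, 1.0    # e.g. a lidar range estimate (hypothetical)

# Inverse-variance weighting: the minimum-variance linear combination
# of independent unbiased estimates.
w1, w2 = 1.0 / var1, 1.0 / var2
z_fused = (w1 * z1 + w2 * z2) / (w1 + w2)
var_fused = 1.0 / (w1 + w2)

print(z_fused)    # 9.72, pulled toward the more certain sensor
print(var_fused)  # 0.8, lower than either input variance
```

Note that the fused variance (0.8) is below even the better sensor's variance (1.0): combining sources strictly reduces uncertainty under these independence assumptions.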
Latest papers
MIPI 2022 Challenge on RGBW Sensor Fusion: Dataset and Report
A detailed description of all models developed in this challenge is provided in this paper.
Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer
Large-scale deployment of autonomous vehicles has been continually delayed due to safety concerns.
HRFuser: A Multi-resolution Sensor Fusion Architecture for 2D Object Detection
Besides standard cameras, autonomous vehicles typically include multiple additional sensors, such as lidars and radars, which help acquire richer information for perceiving the content of the driving scene.
TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving
At the time of submission, TransFuser outperforms all prior work on the CARLA leaderboard in terms of driving score by a large margin.
BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation
Multi-sensor fusion is essential for an accurate and reliable autonomous driving system.
Deep Sensor Fusion with Pyramid Fusion Networks for 3D Semantic Segmentation
A novel Pyramid Fusion Backbone fuses feature maps from different modalities at multiple scales and combines them in a feature pyramid to compute rich multimodal, multi-scale features.
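The general pattern here (fuse per scale, then combine coarse-to-fine in a pyramid) can be sketched in a few lines. This is not the paper's implementation; it is a minimal numpy illustration with hypothetical shapes, a stubbed averaging in place of a learned fusion layer, and nearest-neighbour upsampling in place of learned upsampling:

```python
import numpy as np

def fuse_scale(feat_a, feat_b):
    # Per-scale multimodal fusion, stubbed as simple averaging; a real
    # backbone would use a learned layer (e.g. concat + convolution).
    return 0.5 * (feat_a + feat_b)

def upsample2x(feat):
    # Nearest-neighbour 2x upsampling of a (C, H, W) feature map.
    return feat.repeat(2, axis=1).repeat(2, axis=2)

# Hypothetical camera and lidar feature maps at three pyramid scales.
scales = [(8, 32, 32), (8, 16, 16), (8, 8, 8)]
rng = np.random.default_rng(0)
cam = [rng.random(s) for s in scales]
lid = [rng.random(s) for s in scales]

# Fuse modalities at each scale, then merge coarse-to-fine:
# upsample the coarser fused map and add it to the next finer one.
fused = [fuse_scale(c, l) for c, l in zip(cam, lid)]
out = fused[-1]
for f in reversed(fused[:-1]):
    out = f + upsample2x(out)

print(out.shape)  # (8, 32, 32): full-resolution multimodal features
```

The coarse-to-fine merge is the same top-down path used by standard feature pyramid networks, applied here to already-fused multimodal maps.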
STCrowd: A Multimodal Dataset for Pedestrian Perception in Crowded Scenes
In addition, considering the property of sparse global distribution and density-varying local distribution of pedestrians, we further propose a novel method, Density-aware Hierarchical heatmap Aggregation (DHA), to enhance pedestrian perception in crowded scenes.
Proactive Anomaly Detection for Robot Navigation with Multi-Sensor Fusion
The ability to detect such anomalous behaviors is a key component of modern robots seeking to achieve high levels of autonomy.
Fusing Event-based and RGB camera for Robust Object Detection in Adverse Conditions
The ability to detect objects under image corruptions and varying weather conditions is vital for deep learning models, especially when applied to real-world tasks such as autonomous driving.
TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers
The attention mechanism of the transformer enables our model to adaptively determine where and what information should be taken from the image, leading to a robust and effective fusion strategy.
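The "adaptively determine where and what information should be taken from the image" step is, at its core, cross-attention: LiDAR-derived object queries attend over image feature tokens. A minimal numpy sketch of that mechanism, with hypothetical shapes and random features (not the TransFusion implementation):

```python
import numpy as np

def cross_attention(queries, keys, values):
    # Scaled dot-product attention: each query forms a softmax
    # distribution over the keys ("where" to look) and returns the
    # corresponding weighted sum of values ("what" to take).
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values

# Hypothetical sizes: 5 LiDAR object queries, 100 image feature tokens,
# both embedded in a shared 64-dimensional space.
rng = np.random.default_rng(0)
lidar_queries = rng.normal(size=(5, 64))
image_tokens = rng.normal(size=(100, 64))

fused = cross_attention(lidar_queries, image_tokens, image_tokens)
print(fused.shape)  # (5, 64): one image-enriched feature per query
```

In a full detector the queries, keys, and values would each pass through learned projections, and several such layers would be stacked; the sketch keeps only the attention arithmetic.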