Sensor Fusion
89 papers with code • 0 benchmarks • 2 datasets
Sensor fusion is the process of combining sensor data or data derived from disparate sources such that the resulting information has less uncertainty than would be possible when these sources were used individually. [Wikipedia]
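To make the uncertainty-reduction idea concrete, here is a minimal Python sketch of inverse-variance weighted fusion, the scalar special case of a Kalman measurement update. The function name and sensor values are illustrative assumptions, not drawn from any paper listed below.

```python
def fuse(mu_a, var_a, mu_b, var_b):
    """Fuse two independent, unbiased estimates of the same quantity
    by inverse-variance weighting. The fused variance is smaller than
    either input variance -- the "less uncertainty" in the definition."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    mu = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)
    var = 1.0 / (w_a + w_b)
    return mu, var

# Hypothetical example: a noisy radar range and a more precise
# LiDAR range of the same target.
mu, var = fuse(mu_a=10.3, var_a=0.5, mu_b=10.1, var_b=0.1)
print(mu, var)  # ~10.13, ~0.083 -- tighter than either sensor alone
```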
Latest papers
Cognitive TransFuser: Semantics-guided Transformer-based Sensor Fusion for Improved Waypoint Prediction
Sensor fusion approaches remain key to driving-scene understanding for intelligent self-driving agents, given the global visual context acquired from the input sensors.
ROFusion: Efficient Object Detection using Hybrid Point-wise Radar-Optical Fusion
In this paper, we propose a hybrid point-wise Radar-Optical fusion approach for object detection in autonomous driving scenarios.
MaskedFusion360: Reconstruct LiDAR Data by Querying Camera Features
In self-driving applications, LiDAR data provides accurate information about distances in 3D but lacks the semantic richness of camera data.
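Methods in this family build on a standard geometric primitive: projecting LiDAR points into the camera image so each 3D point can be paired with camera features. The sketch below shows only that generic projection step (the extrinsic and intrinsic matrices are assumed inputs), not the MaskedFusion360 method itself.

```python
import numpy as np

def project_lidar_to_image(points_xyz, T_cam_lidar, K):
    """Project LiDAR points (N, 3) into pixel coordinates.

    T_cam_lidar: 4x4 camera-from-LiDAR extrinsic transform.
    K: 3x3 camera intrinsic matrix.
    Returns (M, 2) pixel coordinates and the indices of the M points
    that lie in front of the camera.
    """
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # homogeneous coords
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]          # points in the camera frame
    in_front = cam[:, 2] > 0.1                      # discard points behind the camera
    uvw = (K @ cam[in_front].T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                   # perspective divide
    return uv, np.flatnonzero(in_front)
```

The resulting pixel coordinates can then be used to sample a camera feature map, attaching semantics to each projected point.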
Towards a Robust Sensor Fusion Step for 3D Object Detection on Corrupted Data
Multimodal sensor fusion methods for 3D object detection have been revolutionizing the autonomous driving research field.
L2V2T2Calib: Automatic and Unified Extrinsic Calibration Toolbox for Different 3D LiDAR, Visual Camera and Thermal Camera
To unify the calibration process, an important step is to automatically and robustly detect the calibration target across different types of LiDARs.
Radar Enlighten the Dark: Enhancing Low-Visibility Perception for Automated Vehicles with Camera-Radar Fusion
Sensor fusion is a crucial augmentation technique for improving the accuracy and reliability of perception systems for automated vehicles under diverse driving conditions.
Leveraging BEV Representation for 360-degree Visual Place Recognition
In addition, the image and point cloud cues can easily be expressed in the same coordinate frame, which benefits sensor fusion for place recognition.
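A minimal sketch of that shared-coordinate idea: rasterizing a LiDAR point cloud into a bird's-eye-view occupancy grid. The grid extent and resolution below are assumed values, not taken from the paper; camera features warped into the same grid could then be fused cell by cell.

```python
import numpy as np

def lidar_to_bev(points_xyz, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0), res=0.5):
    """Mark every BEV cell that contains at least one LiDAR point."""
    x, y = points_xyz[:, 0], points_xyz[:, 1]
    keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    cols = ((x[keep] - x_range[0]) / res).astype(int)   # grid column per point
    rows = ((y[keep] - y_range[0]) / res).astype(int)   # grid row per point
    h = int((y_range[1] - y_range[0]) / res)
    w = int((x_range[1] - x_range[0]) / res)
    bev = np.zeros((h, w), dtype=np.uint8)
    bev[rows, cols] = 1
    return bev
```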
Zenseact Open Dataset: A large-scale and diverse multimodal dataset for autonomous driving
The dataset is composed of Frames, Sequences, and Drives, designed to encompass both data diversity and support for spatio-temporal learning, sensor fusion, localization, and mapping.
A Preliminary Study of Deep Learning Sensor Fusion for Pedestrian Detection
Additionally, a custom dataset of 80 images was proposed: 60 for training the architecture, 10 for evaluation, and 10 for testing.
Radar-Camera Fusion for Object Detection and Semantic Segmentation in Autonomous Driving: A Comprehensive Review
Driven by deep learning techniques, perception technology in autonomous driving has developed rapidly in recent years, enabling vehicles to accurately detect and interpret the surrounding environment for safe and efficient navigation.