Sensor Fusion
92 papers with code • 0 benchmarks • 2 datasets
Sensor fusion is the process of combining sensor data or data derived from disparate sources such that the resulting information has less uncertainty than would be possible when these sources were used individually. [Wikipedia]
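The "less uncertainty" property can be illustrated with the simplest possible case: fusing two independent, noisy scalar measurements by inverse-variance weighting. This is a minimal sketch of the general idea, not any specific paper's method; the sensor names and numbers are illustrative assumptions.

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Fuse two independent Gaussian estimates of one scalar quantity.

    The fused variance is smaller than either input variance, which is
    exactly the "less uncertainty" property in the definition above.
    """
    w_a = var_b / (var_a + var_b)              # weight on estimate A
    w_b = var_a / (var_a + var_b)              # weight on estimate B
    fused_mean = w_a * mean_a + w_b * mean_b
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused_mean, fused_var

# e.g. a radar range (10.0 m, variance 4.0) fused with a lidar range
# (10.6 m, variance 1.0): the result leans toward the better sensor.
mean, var = fuse(10.0, 4.0, 10.6, 1.0)
assert var < 1.0                               # less uncertain than either input
```

The more precise sensor gets the larger weight, and the fused variance is the harmonic-mean-style combination of the two input variances.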
Most implemented papers
Integrating Generic Sensor Fusion Algorithms with Sound State Representations through Encapsulation of Manifolds
Common estimation algorithms, such as least squares estimation or the Kalman filter, operate on a state in a state space S that is represented as a real-valued vector.
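The paper's point is that many states (orientations, headings) do not live in a flat vector space, and that generic estimators can still handle them if state access is encapsulated behind manifold operators. A hedged sketch for a circle-valued heading state, using the paper's boxplus/boxminus notation; the iterative averaging loop is an illustrative use, not the paper's code:

```python
import math

def boxplus(angle, delta):
    """Apply a real-valued increment to an angle, staying on the circle."""
    return math.atan2(math.sin(angle + delta), math.cos(angle + delta))

def boxminus(a, b):
    """Difference of two angles as a real number in (-pi, pi]."""
    return math.atan2(math.sin(a - b), math.cos(a - b))

def angular_mean(angles, iters=10):
    """Generic iterative mean that touches the state only via the operators."""
    est = angles[0]
    for _ in range(iters):
        # average the residuals in the tangent space, then map back
        delta = sum(boxminus(a, est) for a in angles) / len(angles)
        est = boxplus(est, delta)
    return est

# Naive vector averaging of 179 deg and -179 deg gives 0 deg;
# the manifold-aware mean correctly lands near +/-180 deg.
m = angular_mean([math.radians(179), math.radians(-179)])
```

The estimator never sees the wrap-around: any algorithm written against boxplus/boxminus works unchanged on vector states, angles, or rotations.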
Distributed Mapping with Privacy and Communication Constraints: Lightweight Algorithms and Object-based Models
Our field tests show that the combined use of our distributed algorithms and object-based models reduces the communication requirements by several orders of magnitude and enables distributed mapping with large teams of robots in real-world problems.
Distributed Deep Neural Networks over the Cloud, the Edge and End Devices
In our experiments, compared with the traditional approach of offloading raw sensor data to the cloud for processing, DDNN processes most sensor data locally on end devices while achieving high accuracy, reducing communication cost by more than 20x.
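The local-versus-cloud decision in DDNN rests on an early-exit test: the end device keeps its own prediction when it is confident enough and offloads only uncertain inputs. A minimal sketch of that decision rule, assuming an entropy threshold and placeholder models (both assumptions, not the paper's code):

```python
import math

def normalized_entropy(probs):
    """Entropy of a softmax output, scaled to [0, 1]."""
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(len(probs))

def classify(local_probs, cloud_model, threshold=0.3):
    """Keep the local prediction if confident; otherwise offload."""
    if normalized_entropy(local_probs) < threshold:
        # confident: answer on-device, no communication needed
        return max(range(len(local_probs)), key=local_probs.__getitem__)
    # uncertain: send the sample onward for the larger cloud model
    return cloud_model(local_probs)
```

Because most inputs are easy, most samples exit locally, which is where the claimed reduction in communication cost comes from.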
DeLS-3D: Deep Localization and Segmentation with a 3D Semantic Map
The uniqueness of our design is a sensor fusion scheme which integrates camera videos, motion sensors (GPS/IMU), and a 3D semantic map in order to achieve robustness and efficiency of the system.
Modular Sensor Fusion for Semantic Segmentation
Sensor fusion is a fundamental process in robotic systems as it extends the perceptual range and increases robustness in real-world operations.
Online Temporal Calibration for Monocular Visual-Inertial Systems
Visual-inertial fusion has become a popular approach to 6-DOF state estimation in recent years.
Multimodal Sensor Fusion in Single Thermal Image Super-Resolution
(III) A benchmark dataset, ULB17-VT, containing thermal images and their visible-image counterparts is presented.
Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather
The fusion of multimodal sensor streams, such as camera, lidar, and radar measurements, plays a critical role in object detection for autonomous vehicles, which base their decision making on these inputs.
Tightly Coupled 3D Lidar Inertial Odometry and Mapping
Through sensor fusion, we can compensate for the deficiencies of stand-alone sensors and provide more reliable estimates.
Uncertainty Estimation in One-Stage Object Detection
Environment perception is the task on which all subsequent processing steps of an intelligent vehicle rely.