Sensor Fusion
88 papers with code • 0 benchmarks • 2 datasets
Sensor fusion is the process of combining sensor data or data derived from disparate sources so that the resulting information has less uncertainty than would be possible if these sources were used individually. [Wikipedia]
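The uncertainty-reduction idea can be made concrete with inverse-variance weighting of two independent noisy measurements of the same quantity: the fused variance is always smaller than either input variance. A minimal sketch (the sensor readings and variances below are made up for illustration):

```python
import numpy as np

def fuse(z1, var1, z2, var2):
    """Fuse two independent measurements of the same quantity by
    inverse-variance weighting (the minimum-variance linear estimator)."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    z_fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    var_fused = 1.0 / (w1 + w2)  # always <= min(var1, var2)
    return z_fused, var_fused

# e.g., a range from radar (low noise) and from a camera (higher noise)
z, var = fuse(10.2, 0.04, 9.8, 0.25)
print(z, var)  # fused estimate lies nearer the radar reading; variance ~0.034
```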
Latest papers
DBA-Fusion: Tightly Integrating Deep Dense Visual Bundle Adjustment with Multiple Sensors for Large-Scale Localization and Mapping
Visual simultaneous localization and mapping (VSLAM) has broad applications, with state-of-the-art methods leveraging deep neural networks for better robustness and applicability.
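Bundle adjustment, the optimization at the heart of such systems, minimizes reprojection error jointly over camera poses and 3D points. A minimal sketch of the residual for one pinhole-camera observation (the symbols and the example values are illustrative, not DBA-Fusion's actual formulation):

```python
import numpy as np

def reprojection_residual(R, t, K, X, uv_observed):
    """Residual between an observed pixel and the projection of a 3D point.
    R, t: camera rotation (3x3) and translation (3,); K: intrinsics (3x3);
    X: world point (3,); uv_observed: measured pixel (2,).
    Bundle adjustment stacks these residuals over all observations and
    minimizes their squared sum over poses and points (e.g., Gauss-Newton)."""
    x_cam = R @ X + t          # world -> camera frame
    x_img = K @ x_cam          # camera -> homogeneous pixel coordinates
    uv = x_img[:2] / x_img[2]  # perspective division
    return uv - uv_observed

# Identity pose, unit intrinsics: a point on the optical axis projects to (0, 0).
r = reprojection_residual(np.eye(3), np.zeros(3), np.eye(3),
                          np.array([0.0, 0.0, 2.0]), np.zeros(2))
print(r)  # [0. 0.]
```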
GDTM: An Indoor Geospatial Tracking Dataset with Distributed Multimodal Sensors
Constantly locating moving objects, i.e., geospatial tracking, is essential for autonomous building infrastructure.
Multi-View Conformal Learning for Heterogeneous Sensor Fusion
Our results also showed that multi-view models generate prediction sets with less uncertainty than single-view models.
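Split conformal prediction turns any classifier's scores into prediction sets with a coverage guarantee, and set size then acts as the uncertainty measure behind the multi-view vs. single-view comparison. A minimal single-view sketch following the standard split-conformal recipe (the calibration data here are synthetic stand-ins for a trained model's outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_classes, alpha = 500, 5, 0.1  # 90% target coverage

# Stand-in for a trained model: calibration softmax scores and true labels.
cal_probs = rng.dirichlet(np.ones(n_classes), size=n_cal)
cal_labels = rng.integers(0, n_classes, size=n_cal)

# Nonconformity score: 1 - probability assigned to the true class.
scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]

# Conformal quantile with the finite-sample correction.
q = np.quantile(scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal,
                method="higher")

# Prediction set for a new example: all classes whose score clears the threshold.
test_probs = rng.dirichlet(np.ones(n_classes))
prediction_set = np.where(1.0 - test_probs <= q)[0]
print(prediction_set)  # smaller sets <=> less uncertainty
```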
A-KIT: Adaptive Kalman-Informed Transformer
In this paper, we derive and introduce A-KIT, an adaptive Kalman-informed transformer to learn the varying process noise covariance online.
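The core idea is to supply the Kalman filter's process noise covariance Q from a learned model instead of fixing it by hand. A minimal predict/update sketch of a linear Kalman filter that accepts a time-varying Q (the `estimate_Q` hook named in the comment is a hypothetical stand-in for the transformer, not the paper's code):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One linear Kalman filter cycle with a time-varying process noise Q.
    x, P: prior state mean/covariance; z: measurement;
    F: state transition; H: measurement model; Q, R: noise covariances."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q           # Q may come from a learned estimator
    # Update
    S = H @ P_pred @ H.T + R           # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# One step of a 1-D constant-position model with a (here fixed) Q.
# In an A-KIT-style loop, Q would instead be predicted online, e.g.:
#   Q = estimate_Q(recent_measurements)   # hypothetical learned model
x, P = kalman_step(np.array([0.0]), np.eye(1), np.array([1.0]),
                   np.eye(1), np.eye(1), 0.1 * np.eye(1), np.eye(1))
print(x, P)
```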
Autonomous Driving using Residual Sensor Fusion and Deep Reinforcement Learning
This paper proposes a novel approach by integrating sensor fusion with deep reinforcement learning, specifically the Soft Actor-Critic (SAC) algorithm, to develop an optimal control policy for self-driving cars.
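At the interface level, a common pattern is to encode each modality separately, fuse the embeddings, and feed the fused vector to the actor-critic networks. A minimal PyTorch sketch of such a fusion front-end (the architecture, names, and dimensions are illustrative assumptions, not the paper's network):

```python
import torch
import torch.nn as nn

class FusionEncoder(nn.Module):
    """Encodes camera and LiDAR-like inputs separately, then fuses them
    into a single observation embedding for an RL policy (e.g., SAC)."""
    def __init__(self, img_dim=512, lidar_dim=64, out_dim=256):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Linear(img_dim, out_dim), nn.ReLU())
        self.lidar_enc = nn.Sequential(nn.Linear(lidar_dim, out_dim), nn.ReLU())
        self.fuse = nn.Linear(2 * out_dim, out_dim)

    def forward(self, img_feat, lidar_feat):
        z = torch.cat([self.img_enc(img_feat), self.lidar_enc(lidar_feat)], dim=-1)
        return torch.relu(self.fuse(z))  # fused observation for actor/critic

enc = FusionEncoder()
obs = enc(torch.randn(1, 512), torch.randn(1, 64))
print(obs.shape)  # torch.Size([1, 256])
```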
Achelous++: Power-Oriented Water-Surface Panoptic Perception Framework on Edge Devices based on Vision-Radar Fusion and Pruning of Heterogeneous Modalities
Robust perception of urban water surfaces serves as the foundation for intelligent monitoring of aquatic environments and for the autonomous navigation and operation of unmanned vessels, especially in the context of waterway safety.
LCPR: A Multi-Scale Attention-Based LiDAR-Camera Fusion Network for Place Recognition
However, most existing multimodal place recognition methods only use limited field-of-view camera images, which leads to an imbalance between features from different modalities and limits the effectiveness of sensor fusion.
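Attention-based fusion typically lets tokens from one modality attend to tokens from the other. A minimal cross-attention sketch using torch.nn.MultiheadAttention (the token counts, dimensions, and query/key roles are illustrative assumptions, not LCPR's architecture):

```python
import torch
import torch.nn as nn

d_model = 128
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

# Hypothetical feature tokens: 64 LiDAR BEV tokens, 100 camera patch tokens.
lidar_tokens = torch.randn(1, 64, d_model)
camera_tokens = torch.randn(1, 100, d_model)

# LiDAR tokens query the camera tokens, injecting visual context into the
# LiDAR representation; the reverse direction is often applied as well.
fused, _ = attn(query=lidar_tokens, key=camera_tokens, value=camera_tokens)
print(fused.shape)  # torch.Size([1, 64, 128])
```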
RGB-X Object Detection via Scene-Specific Fusion Modules
Multimodal deep sensor fusion has the potential to enable autonomous vehicles to visually understand their surrounding environments in all weather conditions.
LeTFuser: Light-weight End-to-end Transformer-Based Sensor Fusion for Autonomous Driving with Multi-Task Learning
In end-to-end autonomous driving, existing sensor fusion techniques and navigational control methods for imitation learning prove inadequate in challenging situations involving numerous dynamic agents.
OceanBench: The Sea Surface Height Edition
It provides plug-and-play data and pre-configured pipelines for ML researchers to benchmark their models, and a transparent, configurable framework for researchers to customize and extend the pipeline for their tasks.