Sensor Fusion
92 papers with code • 0 benchmarks • 2 datasets
Sensor fusion is the process of combining sensor data or data derived from disparate sources such that the resulting information has less uncertainty than would be possible if these sources were used individually. [Wikipedia]
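As a minimal illustration of this variance-reduction property, the sketch below fuses two independent noisy measurements of the same quantity by inverse-variance weighting, the optimal linear rule under independent Gaussian noise; the sensor names and numbers are illustrative only.

```python
def fuse(z1, var1, z2, var2):
    """Inverse-variance weighted fusion of two independent Gaussian
    measurements of the same quantity."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # always <= min(var1, var2)
    return fused, fused_var

# Illustrative example: a noisy radar range and a more precise lidar range
z, v = fuse(z1=10.3, var1=0.5, z2=10.1, var2=0.2)
print(z, v)  # fused variance ~0.143 < 0.2: less uncertain than either sensor alone
```

This is essentially the scalar, static special case of the Kalman filter update that underlies many of the fusion methods listed below.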
Latest papers
A Preliminary Study of Deep Learning Sensor Fusion for Pedestrian Detection
Additionally, a custom dataset of 80 images was proposed: 60 for training the architecture, 10 for evaluation, and 10 for testing.
Radar-Camera Fusion for Object Detection and Semantic Segmentation in Autonomous Driving: A Comprehensive Review
Driven by deep learning techniques, perception technology in autonomous driving has developed rapidly in recent years, enabling vehicles to accurately detect and interpret the surrounding environment for safe and efficient navigation.
TransFusionOdom: Interpretable Transformer-based LiDAR-Inertial Fusion Odometry Estimation
A synthetic multi-modal dataset is made public to validate the generalization ability of the proposed fusion strategy, which also works for other combinations of different modalities.
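The exact TransFusionOdom architecture is not reproduced here; as a rough sketch of how transformer-style attention can fuse two modalities (and expose attention weights for interpretability), the snippet below lets IMU feature tokens attend to LiDAR feature tokens using standard PyTorch modules. All dimensions, names, and the pose-regression head are assumptions.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Toy attention-based fusion: IMU tokens query LiDAR tokens.
    Illustrative only; not the TransFusionOdom architecture."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 6)  # assumed 6-DoF relative-pose output

    def forward(self, imu_tokens, lidar_tokens):
        # Query: IMU features; Key/Value: LiDAR features
        fused, attn_weights = self.attn(imu_tokens, lidar_tokens, lidar_tokens)
        # Pool over the token axis and regress a pose increment
        return self.head(fused.mean(dim=1)), attn_weights

model = CrossModalFusion()
imu = torch.randn(2, 10, 64)    # batch of 2, 10 IMU tokens
lidar = torch.randn(2, 32, 64)  # batch of 2, 32 LiDAR feature tokens
pose, w = model(imu, lidar)     # attention weights give some interpretability
print(pose.shape, w.shape)      # torch.Size([2, 6]) torch.Size([2, 10, 32])
```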
Tightly-coupled Visual-DVL-Inertial Odometry for Robot-based Ice-water Boundary Exploration
The proposed method is validated on a dataset collected in the field under frozen ice, and the results are compared with six other sensor fusion setups.
A Modular Platform For Collaborative, Distributed Sensor Fusion
Leading autonomous vehicle (AV) platforms and testing infrastructures are, unfortunately, proprietary and closed-source.
DFR-FastMOT: Detection Failure Resistant Tracker for Fast Multi-Object Tracking Based on Sensor Fusion
The proposed solution outperforms current state-of-the-art methods under various levels of detection distortion.
The LuViRA Dataset: Synchronized Vision, Radio, and Audio Sensors for Indoor Localization
The dataset includes color images, corresponding depth maps, inertial measurement unit (IMU) readings, channel response between a 5G massive multiple-input and multiple-output (MIMO) testbed and user equipment, audio recorded by 12 microphones, and six degrees of freedom (6DOF) pose ground truth accurate to 0.5 mm.
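For readers sketching a loader for such a multi-modal dataset, one plausible record layout for a single synchronized sample is shown below; the field names and shapes are guesses for illustration, not the released schema.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class LuviraSample:
    """Illustrative layout of one synchronized LuViRA frame;
    all fields are assumptions, not the published format."""
    rgb: np.ndarray        # HxWx3 color image
    depth: np.ndarray      # HxW depth map
    imu: np.ndarray        # accelerometer + gyroscope readings
    csi: np.ndarray        # 5G massive-MIMO channel response
    audio: np.ndarray      # 12-channel microphone recording
    pose_6dof: np.ndarray  # ground-truth position + orientation, shape (6,)
```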
On Uncertainty in Deep State Space Models for Model-Based Reinforcement Learning
We show that RSSMs use a suboptimal inference scheme and that models trained using this inference overestimate the aleatoric uncertainty of the ground truth system.
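To make the aleatoric/epistemic distinction concrete, a common recipe (not necessarily the paper's) decomposes an ensemble's predictive variance via the law of total variance, as sketched below with illustrative numbers: aleatoric uncertainty is the average variance each member predicts, epistemic uncertainty is the disagreement between members' means.

```python
import numpy as np

def decompose_uncertainty(means, variances):
    """Split an ensemble's total predictive variance for one input into
    aleatoric and epistemic parts (law of total variance)."""
    aleatoric = np.mean(variances)  # average noise predicted by each member
    epistemic = np.var(means)       # disagreement between member means
    return aleatoric, epistemic, aleatoric + epistemic

# Example: 5 ensemble members predicting a sensor reading
means = np.array([1.02, 0.98, 1.05, 0.97, 1.01])
vars_ = np.array([0.04, 0.05, 0.03, 0.05, 0.04])
print(decompose_uncertainty(means, vars_))
```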
Intelligent Resource Allocation in Joint Radar-Communication With Graph Neural Networks
In this paper, we propose a framework for intelligent vehicles to conduct JRC, with minimal prior knowledge of the system model and a tunable performance balance, in an environment where surrounding vehicles execute radar detection periodically, which is typical in contemporary protocols.
Multiagent Reinforcement Learning Based on Fusion-Multiactor-Attention-Critic for Multiple-Unmanned-Aerial-Vehicle Navigation Control
A feature that measures the total distance traveled by the UAVs is incorporated into the UAV LDS environment to validate energy efficiency.