Sensor Fusion

89 papers with code • 0 benchmarks • 2 datasets

Sensor fusion is the process of combining sensor data or data derived from disparate sources such that the resulting information has less uncertainty than would be possible when these sources were used individually. [Wikipedia]
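The uncertainty reduction in this definition can be seen in the simplest possible case: two noisy measurements of the same quantity combined with inverse-variance weights. This is a generic textbook illustration, not taken from any of the papers below; the sensor names and noise values are made up.

```python
# Minimal sketch of uncertainty-reducing fusion: two scalar measurements of the
# same quantity are combined with inverse-variance weights. The fused variance
# is smaller than either input's, matching the definition above.

def fuse(m1, var1, m2, var2):
    """Inverse-variance (optimal linear) fusion of two scalar measurements."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused_mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # always below min(var1, var2)
    return fused_mean, fused_var

# Illustrative: a range from a low-noise sensor and one from a noisier sensor.
mean, var = fuse(10.2, 0.04, 9.8, 0.25)
```

The fused variance here is 1/(1/0.04 + 1/0.25) ≈ 0.034, below the better sensor's 0.04, which is exactly the "less uncertainty than individually" property.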

Latest papers with no code

COMMIT: Certifying Robustness of Multi-Sensor Fusion Systems against Semantic Attacks

no code yet • 4 Mar 2024

In this work, we propose COMMIT, the first robustness certification framework to certify the robustness of multi-sensor fusion systems against semantic attacks.


OccFusion: A Straightforward and Effective Multi-Sensor Fusion Framework for 3D Occupancy Prediction

no code yet • 3 Mar 2024

This paper introduces OccFusion, a straightforward and efficient sensor fusion framework for predicting 3D occupancy.

RoadRunner - Learning Traversability Estimation for Autonomous Off-road Driving

no code yet • 29 Feb 2024

Furthermore, RoadRunner improves the system latency by a factor of roughly 4, from 500 ms to 140 ms, while improving the accuracy for traversability costs and elevation map predictions.

Comparative Analysis of XGBoost and Minirocket Algorithms for Human Activity Recognition

no code yet • 28 Feb 2024

This study investigates the efficacy of two ML algorithms, eXtreme Gradient Boosting (XGBoost) and MiniRocket, in the realm of HAR using data collected from smartphone sensors.

Secure Navigation using Landmark-based Localization in a GPS-denied Environment

no code yet • 22 Feb 2024

In modern battlefield scenarios, the reliance on GPS for navigation can be a critical vulnerability.

Landmark-based Localization using Stereo Vision and Deep Learning in GPS-Denied Battlefield Environment

no code yet • 19 Feb 2024

The proposed method utilizes a custom-calibrated stereo vision camera for distance estimation and the YOLOv8s model, which is trained and fine-tuned with our real-world dataset for landmark recognition.
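The abstract does not give the paper's distance formula, but calibrated-stereo distance estimation conventionally uses the pinhole relation depth = focal_length × baseline / disparity. The sketch below assumes that standard relation; the focal length and baseline values are illustrative, not from the paper.

```python
# Hedged sketch (not the paper's code): depth from pixel disparity for a
# rectified, calibrated stereo rig, via depth = f * B / d.

def stereo_depth(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Return depth in metres for a given disparity in pixels.

    focal_px and baseline_m are assumed rig parameters for illustration.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A landmark detected (e.g. by a YOLO-style detector) with 8 px disparity:
d = stereo_depth(8.0)  # 700 * 0.12 / 8 = 10.5 m
```

Note the inverse relationship: small disparities map to large depths, so distance error grows quadratically with range for a fixed disparity error.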

AONeuS: A Neural Rendering Framework for Acoustic-Optical Sensor Fusion

no code yet • 5 Feb 2024

Underwater perception and 3D surface reconstruction are challenging problems with broad applications in construction, security, marine archaeology, and environmental monitoring.

Fisheye Camera and Ultrasonic Sensor Fusion For Near-Field Obstacle Perception in Bird's-Eye-View

no code yet • 1 Feb 2024

Therefore, we present, to our knowledge, the first end-to-end multimodal fusion model tailored for efficient obstacle perception in a bird's-eye-view (BEV) perspective, utilizing fisheye cameras and ultrasonic sensors.

iMove: Exploring Bio-impedance Sensing for Fitness Activity Recognition

no code yet • 31 Jan 2024

While IMUs are currently the prominent fitness tracking modality, through iMove, we show bio-impedance can help improve IMU-based fitness tracking through sensor fusion and contrastive learning. To evaluate our methods, we conducted an experiment including six upper body fitness activities performed by ten subjects over five days to collect synchronized data from bio-impedance across two wrists and an IMU on the left wrist. The contrastive learning framework uses the two modalities to train a better IMU-only classification model, where bio-impedance is only required at the training phase, by which the average Macro F1 score with the input of a single IMU was improved by 3.22%, reaching 84.71% compared to the 81.49% of the IMU baseline model.
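The training idea described above (a second modality used only at training time to shape the primary encoder's embedding space) can be sketched with a symmetric InfoNCE-style contrastive loss between paired embeddings. This is a generic illustration, not the authors' code: the encoders, batch, and dimensions are stand-ins.

```python
import numpy as np

# Hedged sketch of contrastive cross-modal training: embeddings of the SAME
# time window from an IMU encoder and a bio-impedance encoder are treated as a
# positive pair; other windows in the batch are negatives. At inference only
# the IMU branch would be kept, as in the abstract above.

rng = np.random.default_rng(0)

def info_nce(imu_emb, bio_emb, temperature=0.1):
    """InfoNCE loss over a batch of row-paired embeddings (IMU -> bio)."""
    imu = imu_emb / np.linalg.norm(imu_emb, axis=1, keepdims=True)
    bio = bio_emb / np.linalg.norm(bio_emb, axis=1, keepdims=True)
    logits = imu @ bio.T / temperature           # batch x batch similarities
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(logits))
    return -log_softmax[idx, idx].mean()         # diagonal = matching pairs

# Stand-in "embeddings" for 4 paired windows, 16-dim; the bio embedding is a
# noisy copy of the IMU one to mimic correlated modalities.
imu_batch = rng.normal(size=(4, 16))
bio_batch = imu_batch + 0.05 * rng.normal(size=(4, 16))
loss = info_nce(imu_batch, bio_batch)
```

Minimizing this loss pulls matched IMU/bio-impedance embeddings together and pushes mismatched ones apart; since the loss only shapes the encoders, the auxiliary modality can be dropped after training.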

Efficient Gesture Recognition on Spiking Convolutional Networks Through Sensor Fusion of Event-Based and Depth Data

no code yet • 30 Jan 2024

As intelligent systems become increasingly important in our daily lives, new ways of interaction are needed.