Sensor Fusion

93 papers with code • 0 benchmarks • 2 datasets

Sensor fusion is the process of combining sensor data or data derived from disparate sources such that the resulting information has less uncertainty than would be possible if these sources were used individually. [Wikipedia]
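
The uncertainty-reduction property has a simple closed form for independent Gaussian estimates. The sketch below is a generic illustration, not drawn from any paper listed here: it fuses two position fixes by inverse-variance weighting, and the fused variance is never larger than the smallest input variance.

```python
import numpy as np

def fuse_measurements(means, variances):
    """Inverse-variance weighted fusion of independent Gaussian estimates.

    The fused variance satisfies fused_var <= min(variances), which is
    the 'less uncertainty' property in the definition above.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused_var = 1.0 / weights.sum()
    fused_mean = fused_var * (weights * means).sum()
    return fused_mean, fused_var

# Hypothetical example: a GPS fix (variance 4.0 m^2) and a lidar fix (1.0 m^2)
mean, var = fuse_measurements([10.2, 9.8], [4.0, 1.0])
print(mean, var)  # 9.88, 0.8 -- fused variance below both inputs
```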

Latest papers with no code

Secure Navigation using Landmark-based Localization in a GPS-denied Environment

no code yet • 22 Feb 2024

In modern battlefield scenarios, the reliance on GPS for navigation can be a critical vulnerability.

Landmark-based Localization using Stereo Vision and Deep Learning in GPS-Denied Battlefield Environment

no code yet • 19 Feb 2024

The proposed method utilizes a custom-calibrated stereo vision camera for distance estimation and the YOLOv8s model, trained and fine-tuned on our real-world dataset, for landmark recognition.
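
The ranging half of such a pipeline presumably rests on the standard pinhole stereo relation Z = f·B/d; a minimal sketch with entirely hypothetical parameter values:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Pinhole stereo range: Z = f * B / d.

    disparity_px: horizontal pixel offset of the landmark between the
                  rectified left and right images.
    focal_px:     focal length in pixels (from calibration).
    baseline_m:   distance between the two camera centers in meters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 700 px focal length, 12 cm baseline, 14 px disparity
print(stereo_depth(14.0, 700.0, 0.12))  # -> 6.0 m to the landmark
```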

AONeuS: A Neural Rendering Framework for Acoustic-Optical Sensor Fusion

no code yet • 5 Feb 2024

Underwater perception and 3D surface reconstruction are challenging problems with broad applications in construction, security, marine archaeology, and environmental monitoring.

Fisheye Camera and Ultrasonic Sensor Fusion For Near-Field Obstacle Perception in Bird's-Eye-View

no code yet • 1 Feb 2024

Therefore, we present, to our knowledge, the first end-to-end multimodal fusion model tailored for efficient obstacle perception in a bird's-eye-view (BEV) perspective, utilizing fisheye cameras and ultrasonic sensors.
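
The paper's architecture is not public; the sketch below only illustrates the generic pattern such a model would likely contain in some form, namely fusing per-modality feature maps that share a common BEV grid (all channel counts and grid sizes assumed).

```python
import torch
import torch.nn as nn

class BEVConcatFusion(nn.Module):
    """Generic BEV-grid fusion: concatenate per-modality feature maps on
    a shared bird's-eye-view grid, then mix with a convolution.
    (Illustrative pattern only; not the architecture from the paper.)
    """
    def __init__(self, cam_ch=64, ultra_ch=8, out_ch=64):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(cam_ch + ultra_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_bev, ultra_bev):
        # cam_bev:   (B, cam_ch,   H, W) fisheye features projected to BEV
        # ultra_bev: (B, ultra_ch, H, W) rasterized ultrasonic returns
        return self.mix(torch.cat([cam_bev, ultra_bev], dim=1))

fused = BEVConcatFusion()(torch.randn(1, 64, 128, 128), torch.randn(1, 8, 128, 128))
print(fused.shape)  # torch.Size([1, 64, 128, 128])
```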

iMove: Exploring Bio-impedance Sensing for Fitness Activity Recognition

no code yet • 31 Jan 2024

While IMUs are currently the prominent fitness tracking modality, through iMove we show that bio-impedance can help improve IMU-based fitness tracking via sensor fusion and contrastive learning. To evaluate our methods, we conducted an experiment covering six upper-body fitness activities performed by ten subjects over five days, collecting synchronized data from bio-impedance sensors on both wrists and an IMU on the left wrist. The contrastive learning framework uses the two modalities to train a better IMU-only classification model, with bio-impedance required only during training; this improved the average Macro F1 score with a single-IMU input by 3.22 percentage points, reaching 84.71% compared to the 81.49% of the IMU baseline model.
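
A common way to realize this kind of cross-modal training is a symmetric InfoNCE objective over time-synchronized embeddings; the sketch below assumes that recipe (the paper's exact loss is not public). The bio-impedance encoder can be discarded after training, leaving an IMU-only model at inference.

```python
import torch
import torch.nn.functional as F

def cross_modal_infonce(imu_emb, bio_emb, temperature=0.1):
    """Symmetric InfoNCE loss aligning time-synchronized IMU and
    bio-impedance embeddings. Row i of each batch is the positive
    pair for row i of the other modality; all other rows are negatives.
    """
    imu = F.normalize(imu_emb, dim=1)
    bio = F.normalize(bio_emb, dim=1)
    logits = imu @ bio.t() / temperature        # (B, B) similarity matrix
    targets = torch.arange(imu.size(0))         # i-th IMU pairs with i-th bio
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Hypothetical batch of 32 pre-computed 128-d embeddings per modality
loss = cross_modal_infonce(torch.randn(32, 128), torch.randn(32, 128))
print(loss.item())
```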

Efficient Gesture Recognition on Spiking Convolutional Networks Through Sensor Fusion of Event-Based and Depth Data

no code yet • 30 Jan 2024

As intelligent systems become increasingly important in our daily lives, new ways of interaction are needed.

TUMTraf Event: Calibration and Fusion Resulting in a Dataset for Roadside Event-Based and RGB Cameras

no code yet • 16 Jan 2024

To the best of our knowledge, no targetless calibration method between event-based and RGB cameras can handle multiple moving objects, nor does any data fusion approach optimized for the roadside ITS domain exist.

HawkRover: An Autonomous mmWave Vehicular Communication Testbed with Multi-sensor Fusion and Deep Learning

no code yet • 3 Jan 2024

Connected and automated vehicles (CAVs) have become a transformative technology that can change our daily life.

Experimental Validation of Sensor Fusion-based GNSS Spoofing Attack Detection Framework for Autonomous Vehicles

no code yet • 2 Jan 2024

To collect data, a vehicle equipped with a GNSS receiver and an Inertial Measurement Unit (IMU) is used.
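
A typical detection signal in such a framework is the innovation between the GNSS fix and the IMU-propagated position. The simplified residual test below illustrates the idea; the threshold, state model, and numbers are assumptions, not taken from the paper.

```python
import numpy as np

def spoof_flag(prev_pos, velocity, dt, gnss_pos, threshold_m=5.0):
    """Flag a GNSS fix whose innovation against the IMU-propagated
    position exceeds a threshold. A simplified residual test; the
    paper's framework uses a fuller sensor-fusion pipeline.
    """
    predicted = prev_pos + velocity * dt        # dead reckoning from IMU
    residual = np.linalg.norm(gnss_pos - predicted)
    return residual > threshold_m, residual

# Hypothetical: vehicle moving at 10 m/s east, GNSS suddenly jumps 15 m ahead
flag, r = spoof_flag(np.array([0.0, 0.0]), np.array([10.0, 0.0]), 1.0,
                     np.array([25.0, 0.0]))
print(flag, r)  # True, 15.0 -- jump inconsistent with IMU motion
```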

Learned Fusion: 3D Object Detection using Calibration-Free Transformer Feature Fusion

no code yet • 14 Dec 2023

The state of the art in 3D object detection using sensor fusion heavily relies on calibration quality, which is difficult to maintain in large scale deployment outside a lab environment.
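
The generic calibration-free pattern is to let tokens from one sensor attend to tokens from the other through learned attention, with no explicit projection matrix between sensor frames. The sketch below shows that pattern under assumed token counts and dimensions; it is not the paper's model.

```python
import torch
import torch.nn as nn

class LearnedFusion(nn.Module):
    """Calibration-free fusion pattern: camera tokens attend to lidar
    tokens purely through learned cross-attention, so no extrinsic
    calibration between the sensor frames is required.
    (Generic sketch, not the architecture from the paper.)
    """
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, cam_tokens, lidar_tokens):
        # cam_tokens:   (B, N_cam, dim)   flattened image features
        # lidar_tokens: (B, N_lidar, dim) voxel/pillar features
        fused, _ = self.attn(cam_tokens, lidar_tokens, lidar_tokens)
        return self.norm(cam_tokens + fused)    # residual connection

out = LearnedFusion()(torch.randn(2, 100, 256), torch.randn(2, 400, 256))
print(out.shape)  # torch.Size([2, 100, 256])
```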