Sensor Fusion

22 papers with code · Miscellaneous

Leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

Online Temporal Calibration for Monocular Visual-Inertial Systems

2 Aug 2018 · HKUST-Aerial-Robotics/VINS-Mono

Visual-inertial fusion has become a popular technology for 6-DOF state estimation in recent years.

AUTONOMOUS DRIVING ROBOT NAVIGATION SENSOR FUSION TIME OFFSET CALIBRATION
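The time-offset-calibration idea above can be sketched in a few lines: once an offset td between the camera and IMU clocks has been estimated, camera frames are simply matched against IMU data sampled at the shifted timestamps. The function and synthetic signal below are hypothetical illustrations, not the VINS-Mono implementation.

```python
import numpy as np

def imu_rate_at_camera_times(t_cam, td, t_imu, gyro_z):
    """Interpolate the IMU yaw rate at offset-corrected camera timestamps.

    A camera frame stamped t_cam is assumed to correspond to IMU time
    t_cam + td, so IMU data is sampled at the shifted times.
    """
    return np.interp(t_cam + td, t_imu, gyro_z)

t_imu = np.linspace(0.0, 1.0, 101)    # IMU samples at 100 Hz
gyro_z = 2.0 * t_imu                  # synthetic, linearly growing yaw rate
t_cam = np.array([0.10, 0.50, 0.90])  # three camera frame timestamps
td = 0.02                             # estimated camera-IMU time offset (s)

rates = imu_rate_at_camera_times(t_cam, td, t_imu, gyro_z)
# rates == gyro_z evaluated at [0.12, 0.52, 0.92]
```

With td = 0 the same code reduces to naive timestamp matching, which is exactly the error the paper's online calibration removes.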

LATTE: Accelerating LiDAR Point Cloud Annotation via Sensor Fusion, One-Click Annotation, and Tracking

19 Apr 2019 · bernwang/latte

One-click annotation: instead of drawing 3D bounding boxes or point-wise labels, we simplify the annotation to just one click on the target object, and automatically generate the bounding box for the target.

AUTONOMOUS VEHICLES SENSOR FUSION

The ApolloScape Open Dataset for Autonomous Driving and its Application

16 Mar 2018 · ApolloScapeAuto/dataset-api

In this paper, we provide a sensor fusion scheme integrating camera videos, consumer-grade motion sensors (GPS/IMU), and a 3D semantic map in order to achieve robust self-localization and semantic segmentation for autonomous driving.

AUTONOMOUS DRIVING INSTANCE SEGMENTATION MULTI-TASK LEARNING SEMANTIC SEGMENTATION SENSOR FUSION
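At its simplest, fusing a GPS/IMU pose prior with a visual-localization estimate as described above is a linear-Gaussian combination of two noisy measurements. The inverse-variance weighting below is a generic textbook sketch of that idea, not the ApolloScape pipeline; all numbers are made up.

```python
import numpy as np

def fuse(prior, var_prior, visual, var_visual):
    """Inverse-variance fusion of two position estimates.

    The noisier source gets the smaller weight; the fused variance is
    always below both input variances.
    """
    w = var_visual / (var_prior + var_visual)  # weight on the GPS/IMU prior
    fused = w * prior + (1.0 - w) * visual
    var = (var_prior * var_visual) / (var_prior + var_visual)
    return fused, var

# Consumer-grade GPS/IMU prior (noisy) vs. visual estimate (sharper).
pos, var = fuse(np.array([10.0, 4.0]), 4.0, np.array([10.4, 4.4]), 1.0)
```

The fused position lands closer to the lower-variance visual estimate, which is the behaviour a full filter (e.g. an EKF over the same sources) generalizes to sequences.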

Improvements to Target-Based 3D LiDAR to Camera Calibration

7 Oct 2019 · UMich-BipedLab/extrinsic_lidar_camera_calibration

The homogeneous transformation between a LiDAR and monocular camera is required for sensor fusion tasks, such as SLAM.

POSE ESTIMATION QUANTIZATION SENSOR FUSION
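The homogeneous transform mentioned above is what lets LiDAR points be colored or labeled from the image: points are mapped into the camera frame with the 4x4 extrinsic matrix, then projected through the pinhole intrinsics. The transform and intrinsics below are made-up illustrative values, not calibrated ones.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project (N, 3) LiDAR points to (N, 2) pixel coordinates."""
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])  # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]          # LiDAR -> camera frame
    uv = (K @ pts_cam.T).T                              # pinhole projection
    return uv[:, :2] / uv[:, 2:3]                       # perspective divide

# Toy extrinsics: identity rotation, 0.1 m lateral offset.
T = np.eye(4)
T[0, 3] = 0.1
# Toy intrinsics: 500 px focal length, principal point (320, 240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

pix = project_lidar_to_image(np.array([[1.0, 0.0, 5.0]]), T, K)
```

Estimating T accurately from a calibration target is precisely what the paper improves; with a poor T the projected points smear across object boundaries in the image.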

LiDARTag: A Real-Time Fiducial Tag using Point Clouds

23 Aug 2019 · UMich-BipedLab/extrinsic_lidar_camera_calibration

Image-based fiducial markers are widely used in robotics and computer vision problems such as object tracking in cluttered or textureless environments, camera (and multi-sensor) calibration tasks, or vision-based simultaneous localization and mapping (SLAM).

MOTION CAPTURE OBJECT TRACKING SENSOR FUSION SIMULTANEOUS LOCALIZATION AND MAPPING

Distributed Deep Neural Networks over the Cloud, the Edge and End Devices

6 Sep 2017 · kunglab/ddnn

In our experiment, compared with the traditional approach of offloading raw sensor data to the cloud for processing, DDNN processes most sensor data locally on end devices while achieving high accuracy, reducing the communication cost by more than 20x.

OBJECT RECOGNITION SENSOR FUSION

MonoLayout: Amodal scene layout from a single image

19 Feb 2020 · hbutsuak95/monolayout

We dub this problem amodal scene layout estimation, which involves "hallucinating" scene layout for even parts of the world that are occluded in the image.

AMODAL LAYOUT ESTIMATION SENSOR FUSION

DeLS-3D: Deep Localization and Segmentation with a 3D Semantic Map

CVPR 2018 · pengwangucla/DeLS-3D

The uniqueness of our design is a sensor fusion scheme which integrates camera videos, motion sensors (GPS/IMU), and a 3D semantic map in order to achieve robustness and efficiency of the system.

AUTONOMOUS DRIVING POSE ESTIMATION SCENE PARSING SENSOR FUSION