Simultaneous Localization and Mapping
134 papers with code • 0 benchmarks • 18 datasets
Simultaneous localization and mapping (SLAM) is the task of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it.
(Image credit: ORB-SLAM2)
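SLAM couples two mutually dependent estimates: the map is built relative to the agent's pose, and the pose is estimated against the map. Below is a minimal, hypothetical sketch of the resulting predict/update loop; all names are illustrative, and a crude correction gain stands in for the Kalman filter, particle filter, or graph optimization a real system would use.

```python
import numpy as np

# Minimal, hypothetical sketch of one 2D SLAM predict/update cycle.
# Heading is ignored for brevity; real systems also handle data association,
# loop closure, and uncertainty, none of which is modeled here.

def slam_step(pose, landmarks, odometry, observations):
    pose = pose + odometry                        # 1. predict from odometry
    for lid, rel in observations:                 # 2. update from observations
        world = pose + rel                        # observed landmark position
        if lid in landmarks:
            # Known landmark: pull the pose toward the stored map position.
            pose = pose + 0.5 * (landmarks[lid] - world)
        else:
            # New landmark: grow the map with the current estimate.
            landmarks[lid] = world
    return pose, landmarks

pose = np.zeros(2)                                # start at the origin
landmarks = {}                                    # map: landmark id -> position
pose, landmarks = slam_step(pose, landmarks,
                            odometry=np.array([1.0, 0.0]),
                            observations=[("tree", np.array([0.0, 2.0]))])
```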
Latest papers
SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM
Dense simultaneous localization and mapping (SLAM) is crucial for robotics and augmented reality applications.
Continuous Pose for Monocular Cameras in Neural Implicit Representation
In this paper, we showcase the effectiveness of optimizing monocular camera poses as a continuous function of time.
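As a rough illustration of what "pose as a continuous function of time" means: instead of one discrete pose per frame, the trajectory can be queried at any timestamp. The paper learns this function jointly with a neural implicit scene; the sketch below only shows the underlying idea with classical interpolation, and all values are made up.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# Hypothetical continuous-time trajectory: rotations are slerped and
# translations linearly interpolated between a few control poses.
key_times = np.array([0.0, 1.0, 2.0])
key_rots = Rotation.from_euler("z", [0.0, 30.0, 90.0], degrees=True)
key_trans = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [1.0, 1.0, 0.0]])

slerp = Slerp(key_times, key_rots)            # continuous rotation R(t)

def pose_at(t):
    """Return (rotation, translation) at any time inside the key interval."""
    R = slerp([t])[0]
    trans = np.array([np.interp(t, key_times, key_trans[:, i]) for i in range(3)])
    return R, trans

R, trans = pose_at(0.5)                       # pose between the first two keys
```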
Monocular Visual Simultaneous Localization and Mapping: (R)evolution from Geometry to Deep Learning-Based Pipelines
With the rise of deep learning, there is a fundamental change in visual simultaneous localization and mapping (SLAM) algorithms toward developing different modules trained as end-to-end pipelines.
GO-SLAM: Global Optimization for Consistent 3D Instant Reconstruction
Neural implicit representations have recently demonstrated compelling results on dense Simultaneous Localization And Mapping (SLAM) but suffer from the accumulation of errors in camera tracking and distortion in the reconstruction.
NTU4DRadLM: 4D Radar-centric Multi-Modal Dataset for Localization and Mapping
The dataset covers both middle- and large-scale outdoor environments, i.e., its six trajectories range from 246 m to 6.95 km.
UncLe-SLAM: Uncertainty Learning for Dense Neural SLAM
We present an uncertainty learning framework for dense neural simultaneous localization and mapping (SLAM).
iSLAM: Imperative SLAM
Simultaneous Localization and Mapping (SLAM) stands as one of the critical challenges in robot navigation.
Volume-DROID: A Real-Time Implementation of Volumetric Mapping with DROID-SLAM
Volume-DROID takes camera images (monocular or stereo) or frames from a video as input and combines DROID-SLAM, point cloud registration, an off-the-shelf semantic segmentation network, and Convolutional Bayesian Kernel Inference (ConvBKI) to generate a 3D semantic map of the environment and provide accurate localization for the robot.
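The semantic-mapping half of such a pipeline can be sketched in a few lines: labeled 3D points are fused into a voxel grid by accumulating per-class evidence. This is a hypothetical toy in the spirit of Bayesian Kernel Inference, not the ConvBKI implementation, which learns the update as a convolution; the grid dimensions and data here are made up.

```python
import numpy as np

# Toy semantic voxel fusion: each voxel keeps Dirichlet concentration
# parameters over classes, incremented by observed labeled points.
GRID, CLASSES, VOXEL = 32, 3, 0.25            # grid size, #classes, voxel edge (m)
alpha = np.ones((GRID, GRID, GRID, CLASSES))  # per-voxel class evidence

def fuse(points, labels):
    """Accumulate one registered, labeled point cloud into the grid."""
    idx = np.clip((points / VOXEL).astype(int), 0, GRID - 1)
    for (i, j, k), c in zip(idx, labels):
        alpha[i, j, k, c] += 1.0              # kernel weight of 1 for brevity

points = np.random.rand(100, 3) * GRID * VOXEL   # fake registered cloud
labels = np.random.randint(0, CLASSES, 100)      # fake per-point classes
fuse(points, labels)
posterior = alpha / alpha.sum(-1, keepdims=True) # per-voxel class probabilities
```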
Rotation Synchronization via Deep Matrix Factorization
In this paper we address the rotation synchronization problem, whose objective is to recover absolute rotations from pairwise ones; the unknowns and the measurements are represented as nodes and edges of a graph, respectively.
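For context, here is a hedged sketch of the classical spectral baseline that deep matrix factorization methods build on: pairwise measurements R_ij = R_i R_j^T fill a block matrix whose leading eigenvectors recover the absolute rotations up to one global rotation. The example assumes a noise-free, complete graph.

```python
import numpy as np
from scipy.spatial.transform import Rotation

n = 4
R_true = [Rotation.random(random_state=s).as_matrix() for s in range(n)]

# Block matrix of pairwise relative rotations on a complete graph.
G = np.zeros((3 * n, 3 * n))
for i in range(n):
    for j in range(n):
        G[3*i:3*i+3, 3*j:3*j+3] = R_true[i] @ R_true[j].T

vals, vecs = np.linalg.eigh(G)                # ascending eigenvalues
V = vecs[:, -3:] * np.sqrt(n)                 # stacked estimates R_i @ Q
if np.linalg.det(V[:3, :]) < 0:               # fix the global reflection
    V[:, 0] *= -1
# Noise-free, so each 3x3 block is already a rotation; with noise each
# block would be projected onto SO(3) via its SVD.
R_est = [V[3*i:3*i+3] for i in range(n)]

# All estimates match the truth up to one shared global rotation.
align = R_est[0].T @ R_true[0]
err = max(np.linalg.norm(R @ align - Rt) for R, Rt in zip(R_est, R_true))
print(f"max alignment error: {err:.2e}")      # ~1e-14
```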
Event-based Simultaneous Localization and Mapping: A Comprehensive Survey
This paper presents a timely and comprehensive review of event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks.
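For readers unfamiliar with the sensor model these surveys cover: an event camera reports per-pixel brightness changes asynchronously rather than synchronized frames, and many event-based SLAM front ends first accumulate a time slice of the stream into a 2D representation. A toy accumulation with made-up values:

```python
import numpy as np

# Hypothetical event stream: (timestamp, x, y, polarity) tuples at a
# DAVIS-like resolution. Values are illustrative only.
H, W = 180, 240
events = np.array([(0.001, 10, 20, 1),
                   (0.002, 10, 21, -1),
                   (0.004, 11, 20, 1)])

def accumulate(events, t0, t1):
    """Sum signed polarities of events in [t0, t1) into an image."""
    img = np.zeros((H, W))
    for t, x, y, p in events:
        if t0 <= t < t1:
            img[int(y), int(x)] += p
    return img

frame = accumulate(events, 0.0, 0.005)        # one event "frame"
```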