Simultaneous Localization and Mapping
134 papers with code • 0 benchmarks • 18 datasets
Simultaneous localization and mapping (SLAM) is the task of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it.
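The coupling between the two subproblems can be shown with a toy 1D illustration (not a real SLAM algorithm): the robot integrates noisy odometry for its own pose ("localization") while using range observations, interpreted relative to that pose, to refine a running estimate of a landmark's position ("mapping"). All names and noise parameters below are illustrative assumptions.

```python
import random

random.seed(0)

true_landmark = 5.0   # ground-truth landmark position (unknown to the robot)
pose_est = 0.0        # localization: estimated robot position from odometry
landmark_est = None   # mapping: estimated landmark position
n_obs = 0

for step in range(50):
    # Motion update: intend to move +0.1 per step; odometry is noisy,
    # so the pose estimate slowly drifts.
    pose_est += 0.1 + random.gauss(0.0, 0.01)

    # Measurement: noisy range from the true pose to the landmark.
    true_pose = 0.1 * (step + 1)
    z = (true_landmark - true_pose) + random.gauss(0.0, 0.05)

    # Mapping step: each observation implies landmark ~ pose_est + z;
    # keep a running average over all observations.
    obs = pose_est + z
    n_obs += 1
    if landmark_est is None:
        landmark_est = obs
    else:
        landmark_est += (obs - landmark_est) / n_obs
```

The point of the sketch is that errors couple: any drift in `pose_est` leaks directly into `landmark_est`, which is why full SLAM systems estimate pose and map jointly rather than in isolation.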
(Image credit: ORB-SLAM2)
Benchmarks
These leaderboards are used to track progress in Simultaneous Localization and Mapping.
Libraries
Use these libraries to find Simultaneous Localization and Mapping models and implementations
Datasets
Most implemented papers
PyRobot: An Open-source Robotics Framework for Research and Benchmarking
This paper introduces PyRobot, an open-source robotics framework for research and benchmarking.
A Fast and Robust Place Recognition Approach for Stereo Visual Odometry Using LiDAR Descriptors
Place recognition is a core component of Simultaneous Localization and Mapping (SLAM) algorithms.
Fast and Incremental Loop Closure Detection with Deep Features and Proximity Graphs
In recent years, the robotics community has extensively examined methods for the place recognition task within the scope of simultaneous localization and mapping applications. This article proposes an appearance-based loop closure detection pipeline named "FILD++" (Fast and Incremental Loop closure Detection). First, the system is fed consecutive images and, by passing them twice through a single convolutional neural network, extracts global and local deep features. Subsequently, a hierarchical navigable small-world graph incrementally constructs a visual database representing the robot's traversed path based on the computed global features. Finally, a query image, captured at each time step, is used to retrieve similar locations on the traversed route. An image-to-image pairing follows, which exploits local features to evaluate the spatial information.
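The incremental retrieval step of such a pipeline can be sketched as follows. This is a minimal illustration, not the FILD++ implementation: random vectors stand in for CNN global features, a brute-force cosine search stands in for the hierarchical navigable small-world graph, and all names (`LoopClosureDB`, `add_frame`, `query`) are hypothetical.

```python
import numpy as np

class LoopClosureDB:
    """Incremental place-recognition database (toy stand-in for an
    HNSW-based visual database over deep global features)."""

    def __init__(self, dim):
        self.dim = dim
        self.features = []  # one global descriptor per visited frame

    def add_frame(self, global_feat):
        # Grow the database incrementally as the robot traverses its path.
        f = global_feat / np.linalg.norm(global_feat)
        self.features.append(f)

    def query(self, global_feat, k=3, exclude_recent=10):
        # Retrieve the k most similar previously visited locations,
        # skipping the most recent frames to avoid trivial self-matches.
        n = len(self.features) - exclude_recent
        if n <= 0:
            return []
        db = np.stack(self.features[:n])
        q = global_feat / np.linalg.norm(global_feat)
        sims = db @ q                      # cosine similarity (unit vectors)
        idx = np.argsort(-sims)[:k]
        return [(int(i), float(sims[i])) for i in idx]
```

In the actual pipeline, candidates returned by this global-feature search would then be verified geometrically with local features before a loop closure is accepted.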
Robust Odometry and Mapping for Multi-LiDAR Systems with Online Extrinsic Calibration
This paper proposes a system to achieve robust and simultaneous extrinsic calibration, odometry, and mapping for multiple LiDARs.
Self-Supervised Learning of Lidar Segmentation for Autonomous Indoor Navigation
We provide insights into our network predictions and show that our approach can also improve the performances of common localization techniques.
CholecSeg8k: A Semantic Segmentation Dataset for Laparoscopic Cholecystectomy Based on Cholec80
Each of these images is annotated at pixel level for thirteen classes, which are commonly found in laparoscopic cholecystectomy surgery.
Unsupervised Scale-consistent Depth Learning from Video
We propose a monocular depth estimator SC-Depth, which requires only unlabelled videos for training and enables the scale-consistent prediction at inference time.
STUN: Self-Teaching Uncertainty Estimation for Place Recognition
Then, supervised by the pretrained teacher net, a student net with an additional variance branch is trained to finetune the embedding priors and estimate the uncertainty sample by sample.
BoW3D: Bag of Words for Real-Time Loop Closing in 3D LiDAR SLAM
To address this limitation, we present a novel Bag of Words for real-time loop closing in 3D LiDAR SLAM, called BoW3D.
Point-SLAM: Dense Neural Point Cloud-based SLAM
We propose a dense neural simultaneous localization and mapping (SLAM) approach for monocular RGBD input which anchors the features of a neural scene representation in a point cloud that is iteratively generated in an input-dependent data-driven manner.