Simultaneous Localization and Mapping

134 papers with code • 0 benchmarks • 18 datasets

Simultaneous localization and mapping (SLAM) is the task of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it.

(Image credit: ORB-SLAM2)
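
As a concrete, self-contained illustration of the joint estimation problem, the sketch below runs one step of a textbook-style 2D EKF-SLAM in Python. It is a generic toy that is not taken from any repository listed on this page; the state layout, noise matrices, and function names are illustrative. The point it makes is that the robot pose and the landmark positions share one state vector, so every range-bearing measurement corrects the localization and the map at the same time.

```python
import numpy as np

def motion(pose, u, dt=1.0):
    """Unicycle motion model; u = (v, w) is linear and angular velocity."""
    v, w = u
    return np.array([pose[0] + v * np.cos(pose[2]) * dt,
                     pose[1] + v * np.sin(pose[2]) * dt,
                     pose[2] + w * dt])

def ekf_slam_step(mu, Sigma, u, z, R, Q, dt=1.0):
    """One EKF-SLAM predict/update step.

    mu   : state [x, y, theta, l1x, l1y, l2x, l2y, ...]
    z    : dict {landmark_id: (range, bearing)}
    R, Q : motion and measurement noise covariances
    """
    n = len(mu)
    mu = mu.astype(float).copy()
    theta = mu[2]
    # --- predict: move the robot, landmarks stay put ---
    mu[:3] = motion(mu[:3], u, dt)
    F = np.eye(n)
    F[0, 2] = -u[0] * np.sin(theta) * dt
    F[1, 2] =  u[0] * np.cos(theta) * dt
    Sigma = F @ Sigma @ F.T
    Sigma[:3, :3] += R                              # motion noise on the pose block
    # --- update: each range-bearing observation corrects pose AND landmark ---
    for j, (r, b) in z.items():
        i = 3 + 2 * j                               # landmark j's slot in the state
        dx, dy = mu[i] - mu[0], mu[i + 1] - mu[1]
        q = dx * dx + dy * dy
        sq = np.sqrt(q)
        z_hat = np.array([sq, np.arctan2(dy, dx) - mu[2]])
        H = np.zeros((2, n))
        H[:, [0, 1, 2, i, i + 1]] = np.array([
            [-dx / sq, -dy / sq,  0,  dx / sq, dy / sq],
            [ dy / q,  -dx / q,  -1, -dy / q,  dx / q ]])
        S = H @ Sigma @ H.T + Q
        K = Sigma @ H.T @ np.linalg.inv(S)
        innov = np.array([r, b]) - z_hat
        innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi   # wrap the bearing
        mu = mu + K @ innov
        Sigma = (np.eye(n) - K @ H) @ Sigma
    return mu, Sigma

# Toy usage: a pose plus two landmarks already in the state.  Landmark
# initialization and data association are omitted for brevity.
mu0 = np.array([0.0, 0.0, 0.0, 2.0, 1.0, -1.0, 3.0])
Sigma0 = np.eye(7) * 0.1
z = {0: (2.3, 0.4), 1: (3.1, 1.8)}
mu1, Sigma1 = ekf_slam_step(mu0, Sigma0, u=(1.0, 0.1), z=z,
                            R=np.eye(3) * 0.01, Q=np.eye(2) * 0.05)
```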

Most implemented papers

PyRobot: An Open-source Robotics Framework for Research and Benchmarking

facebookresearch/pyrobot 19 Jun 2019

This paper introduces PyRobot, an open-source robotics framework for research and benchmarking.

A Fast and Robust Place Recognition Approach for Stereo Visual Odometry Using LiDAR Descriptors

IRVLab/so_dso_place_recognition 16 Sep 2019

Place recognition is a core component of Simultaneous Localization and Mapping (SLAM) algorithms.

Fast and Incremental Loop Closure Detection with Deep Features and Proximity Graphs

anshan-ar/fild 29 Sep 2020

In recent years, the robotics community has extensively examined place recognition methods within the scope of simultaneous localization and mapping applications. This article proposes an appearance-based loop closure detection pipeline named "FILD++" (Fast and Incremental Loop closure Detection). First, the system is fed consecutive images and, by passing each twice through a single convolutional neural network, extracts global and local deep features. Subsequently, a hierarchical navigable small-world graph incrementally builds a visual database representing the robot's traversed path from the computed global features. Finally, the query image grabbed at each time step is used to retrieve similar locations along the traversed route, and an image-to-image pairing that exploits the local features evaluates the spatial consistency of each candidate.
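
The incremental indexing stage described above can be pictured with the hnswlib library, which implements hierarchical navigable small-world graphs. The sketch below is a hedged stand-in rather than FILD++ code: extract_global_descriptor is a placeholder for the CNN global feature, the camera stream is simulated, and the local-feature verification step is only indicated in a comment.

```python
import numpy as np
import hnswlib

DIM = 256          # size of the global descriptor (illustrative)
MIN_GAP = 50       # skip temporally adjacent frames as loop candidates

def extract_global_descriptor(image):
    # Placeholder for the CNN global feature; not part of the FILD++ code base.
    return np.random.rand(DIM).astype(np.float32)

index = hnswlib.Index(space='l2', dim=DIM)
index.init_index(max_elements=50_000, ef_construction=200, M=16)

for frame_id in range(500):                      # stand-in for the camera stream
    image = None                                 # the current frame would go here
    desc = extract_global_descriptor(image).reshape(1, -1)
    if index.get_current_count() > MIN_GAP:
        labels, dists = index.knn_query(desc, k=3)
        candidates = [int(l) for l in labels[0] if frame_id - l > MIN_GAP]
        # candidates would then go through local-feature geometric verification
    index.add_items(desc, np.array([frame_id]))  # the database grows incrementally
```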

Robust Odometry and Mapping for Multi-LiDAR Systems with Online Extrinsic Calibration

gogojjh/M-LOAM 27 Oct 2020

This paper proposes a system to achieve robust and simultaneous extrinsic calibration, odometry, and mapping for multiple LiDARs.

Self-Supervised Learning of Lidar Segmentation for Autonomous Indoor Navigation

utiasasrl/crystal_ball_nav 10 Dec 2020

We provide insights into our network predictions and show that our approach can also improve the performance of common localization techniques.

CholecSeg8k: A Semantic Segmentation Dataset for Laparoscopic Cholecystectomy Based on Cholec80

camma-public/ssg-vqa 23 Dec 2020

Each of these images is annotated at pixel-level for thirteen classes, which are commonly found in laparoscopic cholecystectomy surgery.

Unsupervised Scale-consistent Depth Learning from Video

JiawangBian/sc_depth_pl 25 May 2021

We propose a monocular depth estimator, SC-Depth, which requires only unlabelled videos for training and enables scale-consistent prediction at inference time.
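
In the paper, scale consistency stems from a geometry-consistency term that penalizes disagreement between the depth predicted for one frame and the depth of a neighbouring frame warped into it. The PyTorch sketch below is a simplified re-derivation of such a loss under assumed conventions (T_ab maps points from frame a into frame b, K is the pinhole intrinsic matrix); it is not code from the sc_depth_pl repository, and occlusion or out-of-view masking is omitted.

```python
import torch
import torch.nn.functional as F

def geometry_consistency_loss(depth_a, depth_b, T_ab, K):
    """depth_a, depth_b: (B,1,H,W); T_ab: (B,4,4) transform a->b; K: (B,3,3)."""
    B, _, H, W = depth_a.shape
    device = depth_a.device
    # homogeneous pixel grid of frame a
    ys, xs = torch.meshgrid(torch.arange(H, device=device),
                            torch.arange(W, device=device), indexing='ij')
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).float().view(1, 3, -1)
    # back-project frame a's pixels to 3D, move them into frame b
    cam_a = torch.linalg.inv(K) @ pix * depth_a.view(B, 1, -1)
    cam_a = torch.cat([cam_a, torch.ones(B, 1, H * W, device=device)], dim=1)
    cam_b = (T_ab @ cam_a)[:, :3]
    # project into frame b; the z coordinate is the "computed" depth
    proj = K @ cam_b
    z = proj[:, 2:3].clamp(min=1e-3)
    uv = proj[:, :2] / z
    # sample frame b's predicted depth at the projected locations
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1).view(B, H, W, 2)
    sampled = F.grid_sample(depth_b, grid, align_corners=True)
    computed = z.view(B, 1, H, W)
    # normalized difference between the two depth estimates
    return ((computed - sampled).abs()
            / (computed + sampled).clamp(min=1e-3)).mean()

# toy usage: identity relative pose, simple pinhole intrinsics
B, H, W = 1, 32, 48
K = torch.tensor([[[30.0, 0.0, W / 2], [0.0, 30.0, H / 2], [0.0, 0.0, 1.0]]])
loss = geometry_consistency_loss(torch.rand(B, 1, H, W) + 1.0,
                                 torch.rand(B, 1, H, W) + 1.0,
                                 torch.eye(4).unsqueeze(0), K)
```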

STUN: Self-Teaching Uncertainty Estimation for Place Recognition

ramdrop/stun 3 Mar 2022

Then, supervised by the pretrained teacher net, a student net with an additional variance branch is trained to finetune the embedding priors and estimate the uncertainty sample by sample.
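
A hedged sketch of the self-teaching step reads as follows: the student predicts a mean embedding and a per-dimension log-variance, and is trained to match the frozen teacher's embedding under a Gaussian likelihood, so the variance branch learns a sample-wise uncertainty. The architectures and the exact loss in the stun repository may differ; everything named below is illustrative.

```python
import torch
import torch.nn as nn

class StudentHead(nn.Module):
    """Shared backbone output -> embedding mean and log-variance."""
    def __init__(self, feat_dim=512, emb_dim=256):
        super().__init__()
        self.mean_branch = nn.Linear(feat_dim, emb_dim)
        self.logvar_branch = nn.Linear(feat_dim, emb_dim)

    def forward(self, feats):
        return self.mean_branch(feats), self.logvar_branch(feats)

def distill_loss(student_mu, student_logvar, teacher_emb):
    """Gaussian NLL of the teacher embedding under the student's prediction."""
    inv_var = torch.exp(-student_logvar)
    return (0.5 * inv_var * (teacher_emb - student_mu) ** 2
            + 0.5 * student_logvar).mean()

# toy usage with random features standing in for backbone outputs
head = StudentHead()
feats = torch.randn(8, 512)
teacher_emb = torch.randn(8, 256)          # from the frozen, pretrained teacher
mu, logvar = head(feats)
loss = distill_loss(mu, logvar, teacher_emb)
uncertainty = logvar.exp().mean(dim=1)     # one scalar uncertainty per sample
```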

BoW3D: Bag of Words for Real-Time Loop Closing in 3D LiDAR SLAM

yungecui/bow3d 15 Aug 2022

To address this limitation, we present a novel Bag of Words for real-time loop closing in 3D LiDAR SLAM, called BoW3D.
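
To illustrate the retrieval idea behind loop closing with a bag of words, the sketch below builds a small inverted index over quantized descriptors and votes for previously visited frames. It is a generic stand-in: BoW3D itself works on LiDAR point features with an online-updated vocabulary and hash-based lookup, whereas the vocabulary, descriptor source, and scoring here are simplified assumptions.

```python
from collections import defaultdict
import numpy as np

class BagOfWordsDB:
    """Inverted index over quantized descriptors for place retrieval."""
    def __init__(self, vocabulary):
        self.vocab = vocabulary                  # (num_words, dim) word centers
        self.inverted = defaultdict(set)         # word id -> frames containing it

    def _quantize(self, descriptors):
        # assign every descriptor to its nearest visual word
        d2 = ((descriptors[:, None, :] - self.vocab[None]) ** 2).sum(-1)
        return set(np.argmin(d2, axis=1).tolist())

    def add_frame(self, frame_id, descriptors):
        for w in self._quantize(descriptors):
            self.inverted[w].add(frame_id)

    def query(self, descriptors, current_id, exclude_recent=50, top_k=3):
        votes = defaultdict(int)                 # shared-word count per old frame
        for w in self._quantize(descriptors):
            for fid in self.inverted[w]:
                if current_id - fid > exclude_recent:
                    votes[fid] += 1
        return sorted(votes.items(), key=lambda kv: -kv[1])[:top_k]

# toy usage with random descriptors standing in for per-scan features
rng = np.random.default_rng(0)
db = BagOfWordsDB(vocabulary=rng.normal(size=(100, 32)))
for fid in range(200):
    db.add_frame(fid, rng.normal(size=(150, 32)))
print(db.query(rng.normal(size=(150, 32)), current_id=200))
```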

Point-SLAM: Dense Neural Point Cloud-based SLAM

eriksandstroem/point-slam ICCV 2023

We propose a dense neural simultaneous localization and mapping (SLAM) approach for monocular RGBD input which anchors the features of a neural scene representation in a point cloud that is iteratively generated in an input-dependent data-driven manner.
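
A simplified sketch of the anchoring idea: scene features live at 3D points rather than on a fixed grid, and a query location is decoded from an inverse-distance-weighted blend of its nearest anchored features. Point densification, rendering, and the optimization used by the actual Point-SLAM system are omitted, and all names below are illustrative.

```python
import torch
import torch.nn as nn

class NeuralPointCloud(nn.Module):
    def __init__(self, feat_dim=32, k=8):
        super().__init__()
        self.k = k
        self.feat_dim = feat_dim
        self.points = None                       # (N, 3) anchor positions
        self.features = None                     # (N, feat_dim) learnable features
        self.decoder = nn.Sequential(            # tiny MLP: feature -> e.g. occupancy + RGB
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 4))

    def add_points(self, new_points):
        """Anchor new (M, 3) points, e.g. back-projected from an RGB-D frame."""
        new_feats = nn.Parameter(0.01 * torch.randn(len(new_points), self.feat_dim))
        if self.points is None:
            self.points, self.features = new_points, new_feats
        else:
            self.points = torch.cat([self.points, new_points])
            self.features = nn.Parameter(torch.cat([self.features, new_feats]))

    def query(self, xyz):
        """Decode (Q, 3) query locations from nearby anchored features."""
        d = torch.cdist(xyz, self.points)                   # (Q, N) distances
        dist, idx = d.topk(self.k, dim=1, largest=False)    # k nearest anchors
        w = 1.0 / (dist + 1e-6)
        w = w / w.sum(dim=1, keepdim=True)                  # inverse-distance weights
        feats = (self.features[idx] * w.unsqueeze(-1)).sum(dim=1)
        return self.decoder(feats)

# toy usage
npc = NeuralPointCloud()
npc.add_points(torch.rand(1000, 3))
out = npc.query(torch.rand(16, 3))        # (16, 4) decoded values per query point
```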