Robot Navigation
130 papers with code • 4 benchmarks • 14 datasets
The fundamental objective of mobile Robot Navigation is to reach a goal position without collision. The robot must perceive obstacles with its onboard sensors and move safely through a variety of working environments.
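The goal-reaching-without-collision objective is often illustrated with classical grid-based planning. The sketch below is a minimal A* planner on a 2D occupancy grid (0 = free, 1 = obstacle) with a Manhattan-distance heuristic; it is an illustrative baseline, not the method of any paper listed here, and the grid/goal representation is an assumption.

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 2D occupancy grid (0 = free, 1 = obstacle).

    Returns a list of (row, col) cells from start to goal, or None if
    the goal is unreachable. Uses 4-connected moves of unit cost and
    the (admissible) Manhattan distance as the heuristic.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    # Priority queue entries: (f = g + h, g, cell, parent).
    open_set = [(h(start), 0, start, None)]
    came_from = {}               # cell -> parent, set when cell is expanded
    g_cost = {start: 0}          # best known cost-to-come per cell

    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:    # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:
            # Reconstruct the path by walking parents back to start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            nxt = (nr, nc)
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cell))
    return None  # goal not reachable from start
```

Real navigation stacks replace the static grid with sensor-derived costmaps and layer local obstacle avoidance on top, but the same search structure underlies many global planners.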
Latest papers
Context-Aware Entity Grounding with Open-Vocabulary 3D Scene Graphs
We present an Open-Vocabulary 3D Scene Graph (OVSG), a formal framework for grounding a variety of entities, such as object instances, agents, and regions, with free-form text-based queries.
A Study on Learning Social Robot Navigation with Multimodal Perception
Autonomous mobile robots need to perceive the environments with their onboard sensors (e.g., LiDARs and RGB cameras) and then make appropriate navigation decisions.
VAPOR: Legged Robot Navigation in Outdoor Vegetation Using Offline Reinforcement Learning
We present VAPOR, a novel method for autonomous legged robot navigation in unstructured, densely vegetated outdoor environments using offline Reinforcement Learning (RL).
Improving Generalization in Reinforcement Learning Training Regimes for Social Robot Navigation
We propose a method to improve the generalization performance of RL social navigation methods using curriculum learning.
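Curriculum learning for RL typically means starting training in easy scenarios and promoting the agent to harder ones as it improves. A minimal sketch of such a promotion rule, assuming a hypothetical success-rate metric and discrete difficulty levels (e.g., pedestrian density in a social-navigation simulator), not the paper's actual schedule:

```python
def curriculum_schedule(success_rate, level, threshold=0.8, max_level=5):
    """Advance to the next difficulty level once the agent's recent
    success rate clears a threshold; otherwise stay at the current level.

    success_rate: fraction of recent episodes that reached the goal
                  without collision (hypothetical evaluation metric).
    level:        current curriculum stage (0 = easiest).
    """
    if success_rate >= threshold and level < max_level:
        return level + 1
    return level
```

In a training loop this rule would be evaluated every N episodes, with `level` mapped to environment parameters such as obstacle count or crowd size; more elaborate curricula also allow demotion when performance drops.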
Calibrating Panoramic Depth Estimation for Practical Localization and Mapping
While panoramic images can easily capture the surrounding context from commodity devices, the estimated depth shares the limitations of conventional image-based depth estimation; the performance deteriorates under large domain shifts and the absolute values are still ambiguous to infer from 2D observations.
Integrating LLMs and Decision Transformers for Language Grounded Generative Quality-Diversity
Quality-Diversity is a branch of stochastic optimization that is often applied to problems from the Reinforcement Learning and control domains in order to construct repertoires of well-performing policies/skills that exhibit diversity with respect to a behavior space.
Point Anywhere: Directed Object Estimation from Omnidirectional Images
One of the intuitive instruction methods in robot navigation is a pointing gesture.
Kidnapping Deep Learning-based Multirotors using Optimized Flying Adversarial Patches
We introduce flying adversarial patches, where multiple images are mounted on at least one other flying robot and therefore can be placed anywhere in the field of view of a victim multirotor.
Quantitative Metrics for Benchmarking Human-Aware Robot Navigation
Using the SRPB integrated with the TIAGo robot, we assessed the robot’s behaviour operating with traditional and human-aware trajectory planners in simulated and real-world environments.
Real-time Vision-based Navigation for a Robot in an Indoor Environment
The findings contribute to the advancement of indoor robot navigation, showcasing the potential of vision-based techniques for real-time, autonomous navigation.