Robot Navigation
128 papers with code • 4 benchmarks • 14 datasets
The fundamental objective of mobile Robot Navigation is to reach a goal position without collision. The robot must therefore perceive obstacles and move safely in different working scenarios.
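As a minimal illustration of this objective, the sketch below plans a collision-free path to a goal on a 2-D occupancy grid using breadth-first search. The grid, function name, and cell layout are hypothetical, chosen only to make the goal-reaching-without-collision idea concrete; real navigation stacks add continuous state, kinematics, and reactive avoidance on top of such a planner.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search for a collision-free path on an occupancy grid.
    grid[r][c] == 1 marks an obstacle; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # maps each visited cell to its predecessor
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk predecessors back to the start to recover the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable

# Toy map: a wall in the middle row forces a detour around the right side.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
```

On this map the planner routes around the wall via the free cell at the right edge; any cell on the returned path is obstacle-free by construction.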
Libraries
Use these libraries to find Robot Navigation models and implementations.
Latest papers with no code
Hierarchical Open-Vocabulary 3D Scene Graphs for Language-Grounded Robot Navigation
Recent open-vocabulary robot mapping methods enrich dense geometric maps with pre-trained visual-language features.
SRLM: Human-in-Loop Interactive Social Robot Navigation with Large Language Model and Deep Reinforcement Learning
An interactive social robotic assistant must provide services in complex and crowded spaces while adapting its behavior based on real-time human language commands or feedback.
Belief Aided Navigation using Bayesian Reinforcement Learning for Avoiding Humans in Blind Spots
Recent research on mobile robot navigation has focused on socially aware navigation in crowded environments.
NeuPAN: Direct Point Robot Navigation with End-to-End Model-based Learning
Navigating a nonholonomic robot in a cluttered environment requires extremely accurate perception and locomotion for collision avoidance.
Single-image camera calibration with model-free distortion correction
Camera calibration is a process of paramount importance in computer vision applications that require accurate quantitative measurements.
UniMODE: Unified Monocular 3D Object Detection
To address these challenges, we build a detector based on the bird's-eye-view (BEV) detection paradigm, where the explicit feature projection is beneficial to addressing the geometry learning ambiguity when employing multiple scenarios of data to train detectors.
BioDrone: A Bionic Drone-based Single Object Tracking Benchmark for Robust Vision
These challenges are especially manifested in videos captured by unmanned aerial vehicles (UAV), where the target is usually far away from the camera and often with significant motion relative to the camera.
Vision-Language Models Provide Promptable Representations for Reinforcement Learning
We find that our policies trained on embeddings extracted from general-purpose VLMs outperform equivalent policies trained on generic, non-promptable image embeddings.
Beyond Text: Improving LLM's Decision Making for Robot Navigation via Vocal Cues
We present "Beyond Text", an approach that improves LLM decision-making by integrating audio transcription with a subset of vocal features that capture affect and are most relevant in human-robot conversations.
Acoustic Local Positioning With Encoded Emission Beacons
In airborne system applications, acoustic positioning can rely on opportunistic signals, i.e., sounds produced by the person or object to be located (e.g., noise from appliances or a speaker's speech), or on encoded emission beacons (anchors) designed specifically for this purpose.
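To make the beacon-based variant concrete, here is a minimal 2-D trilateration sketch, not the paper's method: given ranges to three beacons at known positions (in practice obtained from acoustic time-of-flight, d = c·t with c ≈ 343 m/s), it linearizes the range equations against the first beacon and solves the resulting least-squares system. All names and coordinates are hypothetical.

```python
def trilaterate(beacons, dists):
    """Estimate a 2-D position from ranges to >= 3 beacons at known
    positions, by subtracting the first beacon's range equation from
    the others and solving the 2x2 normal equations."""
    (x1, y1), d1 = beacons[0], dists[0]
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xi, yi), di in zip(beacons[1:], dists[1:]):
        # Linearized row: 2(xi-x1)x + 2(yi-y1)y = bi
        ax, ay = 2 * (xi - x1), 2 * (yi - y1)
        bi = d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2
        # Accumulate A^T A and A^T b for the least-squares solve.
        a11 += ax * ax; a12 += ax * ay; a22 += ay * ay
        b1 += ax * bi;  b2 += ay * bi
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Three beacons and noise-free ranges measured from the true position (1, 1).
beacons = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
dists = [2.0**0.5, 10.0**0.5, 5.0**0.5]
pos = trilaterate(beacons, dists)
```

With noise-free ranges the estimate recovers the true position exactly; with noisy time-of-flight measurements the same least-squares form simply averages the error across beacons.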