Robot Navigation

128 papers with code • 4 benchmarks • 14 datasets

The fundamental objective of mobile robot navigation is to reach a goal position without collision. The robot must perceive obstacles and move safely through different working environments.

Source: Learning to Navigate from Simulation via Spatial and Semantic Information Synthesis with Noise Model Embedding
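The stated objective (reach a goal without collision) can be illustrated with a toy planner not tied to any paper listed here: breadth-first search over an occupancy grid, which returns a shortest obstacle-free path if one exists. All names and the grid format below are illustrative assumptions.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest collision-free path on a 4-connected occupancy grid.

    grid:  list of strings; '#' marks an obstacle, '.' free space.
    start, goal: (row, col) tuples.
    Returns a list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # also serves as the visited set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Reconstruct the path by walking the parent links backwards.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # no obstacle-free path exists
```

For example, `bfs_path(["..#", ".#.", "..."], (0, 0), (2, 2))` routes around the two obstacles in five cells. Real navigation stacks replace this with costmap planners and continuous controllers, but the collision-free goal-seeking objective is the same.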

Latest papers with no code

Hierarchical Open-Vocabulary 3D Scene Graphs for Language-Grounded Robot Navigation

no code yet • 26 Mar 2024

Recent open-vocabulary robot mapping methods enrich dense geometric maps with pre-trained visual-language features.

SRLM: Human-in-Loop Interactive Social Robot Navigation with Large Language Model and Deep Reinforcement Learning

no code yet • 22 Mar 2024

An interactive social robotic assistant must provide services in complex and crowded spaces while adapting its behavior based on real-time human language commands or feedback.

Belief Aided Navigation using Bayesian Reinforcement Learning for Avoiding Humans in Blind Spots

no code yet • 15 Mar 2024

Recent research on mobile robot navigation has focused on socially aware navigation in crowded environments.

NeuPAN: Direct Point Robot Navigation with End-to-End Model-based Learning

no code yet • 11 Mar 2024

Navigating a nonholonomic robot in a cluttered environment requires extremely accurate perception and locomotion for collision avoidance.

Single-image camera calibration with model-free distortion correction

no code yet • 2 Mar 2024

Camera calibration is a process of paramount importance in computer vision applications that require accurate quantitative measurements.
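Concretely, calibration of a pinhole camera recovers the intrinsic parameters (focal lengths fx, fy and principal point cx, cy) plus distortion terms; the paper above additionally handles distortion without a parametric model. A minimal sketch of the undistorted pinhole projection these intrinsics define, with hypothetical values for a 640x480 camera:

```python
def project_point(point_cam, fx, fy, cx, cy):
    """Project a 3D point in camera coordinates to pixel coordinates
    via the pinhole model: u = fx * X/Z + cx, v = fy * Y/Z + cy."""
    X, Y, Z = point_cam
    if Z <= 0:
        raise ValueError("point is behind the camera")
    return (fx * X / Z + cx, fy * Y / Z + cy)

# Hypothetical intrinsics; real values come from a calibration procedure.
u, v = project_point((0.1, -0.05, 2.0), fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

Accurate quantitative measurement depends on these parameters: an error in fx or in the distortion model biases every metric estimate derived from the image.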

UniMODE: Unified Monocular 3D Object Detection

no code yet • 28 Feb 2024

The authors build a detector on the bird's-eye-view (BEV) detection paradigm, where explicit feature projection helps resolve the geometry-learning ambiguity that arises when detectors are trained on data from multiple scenarios.

BioDrone: A Bionic Drone-based Single Object Tracking Benchmark for Robust Vision

no code yet • 7 Feb 2024

These challenges are especially pronounced in videos captured by unmanned aerial vehicles (UAVs), where the target is usually far from the camera and often moves significantly relative to it.

Vision-Language Models Provide Promptable Representations for Reinforcement Learning

no code yet • 5 Feb 2024

We find that our policies trained on embeddings extracted from general-purpose VLMs outperform equivalent policies trained on generic, non-promptable image embeddings.

Beyond Text: Improving LLM's Decision Making for Robot Navigation via Vocal Cues

no code yet • 5 Feb 2024

We present "Beyond Text", an approach that improves LLM decision-making by integrating audio transcription with a subset of vocal features that capture affect and are most relevant in human-robot conversations.

Acoustic Local Positioning With Encoded Emission Beacons

no code yet • 4 Feb 2024

In airborne system applications, acoustic positioning can rely on opportunistic signals, i.e., sounds produced by the person or object to be located (e.g., appliance noise or a speaker's voice), or on encoded emission beacons (anchors) designed specifically for this purpose.
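With encoded beacons at known positions, the receiver's position can be estimated from times of flight: each time multiplied by the speed of sound gives a range, and three non-collinear 2D ranges reduce to a linear system. The sketch below is a generic trilateration illustration (hypothetical names and values), not the method of the paper above.

```python
def trilaterate(anchors, tofs, c=343.0):
    """Estimate a 2D position from times of flight to three beacons.

    anchors: three (x, y) beacon positions in metres.
    tofs:    matching times of flight in seconds.
    c:       speed of sound in air, ~343 m/s at room temperature.
    """
    r = [c * t for t in tofs]  # convert times of flight to ranges
    (x1, y1), (x2, y2), (x3, y3) = anchors
    # Subtracting the first range equation from the other two cancels the
    # quadratic terms, leaving a linear 2x2 system A [x, y]^T = b.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r[0] ** 2 - r[1] ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    b2 = r[0] ** 2 - r[2] ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        raise ValueError("beacons are collinear; position is ambiguous")
    # Cramer's rule for the 2x2 system.
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

Practical systems use more beacons with least-squares fitting, and the encoding (e.g., distinct spreading codes per beacon) is what lets the receiver separate and timestamp each beacon's signal.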