The fundamental objective of mobile robot navigation is to reach a goal position without collision. The robot must therefore be aware of obstacles and move freely across different working scenarios.
Developing visual perception models and sensorimotor control for active agents is cumbersome in the physical world: existing algorithms are too slow to learn efficiently in real time, and robots are fragile and costly.
We present a novel mapping framework for robot navigation featuring a multi-level querying system that can rapidly produce representations as diverse as a 3D voxel grid, a 2.5D height map, and a 2D occupancy grid.
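A minimal sketch of what such a multi-level query could look like, assuming occupancy is stored as a dense boolean voxel grid with a known cell size; the `MultiLevelMap` class and its method names are hypothetical illustrations, not the paper's API:

```python
import numpy as np

class MultiLevelMap:
    """Hypothetical multi-level map: one 3D store, three query levels."""

    def __init__(self, voxels: np.ndarray, cell_size: float):
        self.voxels = voxels          # 3D boolean occupancy grid (x, y, z)
        self.cell_size = cell_size    # edge length of one voxel in meters

    def query_3d(self) -> np.ndarray:
        """Full 3D voxel grid: True where a voxel is occupied."""
        return self.voxels

    def query_height_map(self) -> np.ndarray:
        """2.5D height map: top of the highest occupied cell per column, in meters."""
        zs = np.arange(self.voxels.shape[2])
        # Highest occupied z-index per (x, y) column; -1 where the column is empty.
        top = np.where(self.voxels.any(axis=2),
                       np.argmax(self.voxels * zs, axis=2), -1)
        return (top + 1) * self.cell_size  # 0.0 for empty columns

    def query_occupancy_2d(self) -> np.ndarray:
        """2D occupancy grid: a column is occupied if any voxel in it is."""
        return self.voxels.any(axis=2)

# Usage: a 10x10x5 grid with one partially filled column.
grid = np.zeros((10, 10, 5), dtype=bool)
grid[4, 4, :3] = True
m = MultiLevelMap(grid, cell_size=0.1)
print(m.query_height_map()[4, 4])     # 0.3
print(m.query_occupancy_2d()[4, 4])   # True
```

Deriving the 2.5D and 2D views from the same 3D store keeps the levels consistent by construction, at the cost of recomputing the reductions on each query.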
We propose to (i) rethink pairwise interactions with a self-attention mechanism, and (ii) jointly model human-robot as well as human-human interactions within a deep reinforcement learning framework.
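To make the first idea concrete, here is a minimal NumPy sketch of scaled dot-product self-attention over per-human feature vectors (e.g. relative position and velocity). The random weight matrices are placeholders for parameters that would be learned end-to-end with the policy; all names are illustrative, not the paper's architecture:

```python
import numpy as np

def self_attention(features: np.ndarray, d_k: int, rng) -> np.ndarray:
    """features: (n_humans, d_in) -> attended interaction features (n_humans, d_k)."""
    d_in = features.shape[1]
    # Placeholder projections; in a DRL agent these would be trained weights.
    w_q = rng.standard_normal((d_in, d_k)) / np.sqrt(d_in)
    w_k = rng.standard_normal((d_in, d_k)) / np.sqrt(d_in)
    w_v = rng.standard_normal((d_in, d_k)) / np.sqrt(d_in)
    q, k, v = features @ w_q, features @ w_k, features @ w_v
    scores = q @ k.T / np.sqrt(d_k)                # pairwise interaction scores
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over all humans
    return weights @ v                             # each human attends to every other

rng = np.random.default_rng(0)
humans = rng.standard_normal((5, 4))   # 5 humans, 4 features each
crowd = self_attention(humans, d_k=8, rng=rng)
print(crowd.shape)  # (5, 8)
```

The attention weights give a learned, data-dependent importance for each pairwise interaction, rather than a fixed hand-crafted weighting.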
To address the need to learn complex policies with few samples, we propose a generalized computation graph that subsumes value-based model-free methods and model-based methods, with specific instantiations interpolating between model-free and model-based.
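One common way to make this interpolation concrete is an H-step target; the sketch below assumes a target of the form y_t = sum_{i=0}^{H-1} gamma^i r_{t+i} + gamma^H V(s_{t+H}), where the details of the actual computation graph may differ. With H = 1 this reduces to a standard model-free bootstrapped target, while a large H fed with model-predicted rewards behaves model-based:

```python
import numpy as np

def h_step_target(rewards, bootstrap_value, gamma=0.99):
    """rewards: the next H rewards (observed, or predicted by a learned model)."""
    h = len(rewards)
    discounts = gamma ** np.arange(h)
    return float(discounts @ np.asarray(rewards) + gamma**h * bootstrap_value)

rewards = [1.0, 0.5, 0.25, 0.0]
print(h_step_target(rewards[:1], bootstrap_value=2.0))  # H=1: model-free style
print(h_step_target(rewards, bootstrap_value=2.0))      # H=4: longer model rollout
```

Sweeping H trades off bias from the bootstrapped value estimate against compounding error in multi-step reward prediction.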
The problem of 3D layout recovery in indoor scenes has been a core research topic for over a decade.
In this paper, we present a proof of concept for autonomous, self-learning navigation on a real robot in an unknown environment, without a map or planner.
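As a generic illustration of map- and planner-free self-learning (a toy sketch, not the paper's method), tabular Q-learning can already learn goal-reaching on a small grid from reward alone; every hyperparameter below is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(1)
SIZE, GOAL = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
q = np.zeros((SIZE, SIZE, len(ACTIONS)))       # state-action values, no map

for episode in range(500):
    x, y = 0, 0
    for step in range(50):
        # Epsilon-greedy exploration with epsilon = 0.1.
        a = rng.integers(4) if rng.random() < 0.1 else int(np.argmax(q[x, y]))
        dx, dy = ACTIONS[a]
        nx, ny = np.clip(x + dx, 0, SIZE - 1), np.clip(y + dy, 0, SIZE - 1)
        r = 1.0 if (nx, ny) == GOAL else -0.01  # small step penalty, goal bonus
        # One-step Q-learning update (learning rate 0.1, discount 0.95).
        q[x, y, a] += 0.1 * (r + 0.95 * q[nx, ny].max() - q[x, y, a])
        x, y = nx, ny
        if (x, y) == GOAL:
            break

print(np.argmax(q[0, 0]))  # greedy first action learned from the start cell
```

The policy here emerges purely from trial and error against the reward signal, with no explicit map or planning step.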
LiDAR odometry, an important technology for 3D mapping, autonomous driving, and robot navigation, remains a challenging task.
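As a reference point for what a LiDAR odometry front end computes, here is a minimal scan-to-scan point-to-point ICP step, the classical baseline rather than any specific method; `icp_step` is a hypothetical helper, and odometry accumulates the relative transforms it returns across consecutive scans:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source: np.ndarray, target: np.ndarray, iters: int = 20):
    """source, target: (N, 3) point clouds; returns (R, t) with target ≈ R @ source + t."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iters):
        moved = source @ R.T + t
        _, idx = tree.query(moved)             # nearest target point per source point
        matched = target[idx]
        # Kabsch: best rigid transform aligning the matched centered point sets.
        mu_s, mu_m = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        R, t = R_step @ R, R_step @ t + mu_m - R_step @ mu_s
    return R, t

# Two synthetic scans: the second is the first shifted 0.1 m along x.
rng = np.random.default_rng(0)
scan0 = rng.standard_normal((200, 3))
scan1 = scan0 + np.array([0.1, 0.0, 0.0])
R, t = icp_step(scan0, scan1)
print(np.round(t, 3))   # ≈ [0.1, 0.0, 0.0]: the robot's estimated motion
```

Much of the difficulty the sentence alludes to lies beyond this baseline: wrong correspondences under large motion, sparse or degenerate geometry, and drift accumulating over long trajectories.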