Autonomous navigation is the task of guiding a vehicle or robot to or around a location without human intervention.
The robot has already been deployed in several health care facilities, where it is functioning autonomously, assisting staff and patients on an everyday basis.
This paper introduces a novel perception framework that can identify and track objects in an autonomous vehicle's field of view.
Given a strict time budget, Bi3D can detect objects closer than a given distance in as little as a few milliseconds, or estimate depth with arbitrarily coarse quantization, with complexity linear in the number of quantization levels.
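The idea of trading depth resolution for computation can be sketched as a stack of binary "closer than plane d" decisions: one plane gives a fast proximity detector, and summing the binarized masks over several planes yields a quantized depth index, with cost linear in the number of planes. The sketch below is a hypothetical illustration of that quantization step, not the actual Bi3D network; the function name and array layout are assumptions.

```python
import numpy as np

def quantized_depth(closer_prob, planes):
    """Turn per-plane 'closer than plane d' probabilities into a quantized
    depth index per pixel.

    closer_prob: array of shape (len(planes), H, W), values in [0, 1],
                 where closer_prob[k] is the probability a pixel lies in
                 front of candidate depth plane planes[k].
    Returns an (H, W) integer array: the number of planes each pixel lies
    beyond, i.e. its depth bin (len(planes) means 'beyond the last plane').
    Complexity is linear in the number of planes.
    """
    farther = np.asarray(closer_prob) < 0.5  # binarize each plane decision
    return farther.sum(axis=0)               # count planes the pixel is beyond
```

With a single plane, the same computation reduces to a binary proximity mask ("is anything closer than d?"), which matches the few-milliseconds detection use case described above.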
Concurrently, a second back-up algorithm, based on representation learning and resilient to illumination variations, can take control of the machine in case of a momentary failure of the first block.
Mechanistically, we develop new unified templates that facilitate the implementation, deployment and evaluation of a wide range of VPR techniques and datasets.
In this paper, we study a joint detection, mapping and navigation problem for a single unmanned aerial vehicle (UAV) equipped with a low complexity radar and flying in an unknown environment.
In this work, we develop accurate models for understanding spatial references in text that are also robust and interpretable.
The proposed approach deals with the motion, probabilistic safety, and online computation constraints by: (i) incrementally mapping the surroundings to build an uncertainty-aware representation of the environment, and (ii) iteratively (re)planning trajectories to the goal that are kinodynamically feasible and probabilistically safe, through a multi-layered sampling-based planner in the belief space.
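The safety side of such a planning loop can be illustrated with a Monte Carlo check: given an uncertainty-aware map (per-cell occupancy mean and variance), estimate each candidate trajectory's collision probability by sampling map realizations, and accept the first trajectory below a risk threshold, replanning otherwise. This is a minimal sketch under assumed Gaussian per-cell occupancy, not the paper's actual planner; all names and the 0.5 occupancy threshold are assumptions.

```python
import random

def collision_prob(traj, occ_mean, occ_var, n_samples=200, seed=0):
    """Monte Carlo estimate of a trajectory's collision probability.

    traj: sequence of grid cells the trajectory visits.
    occ_mean, occ_var: dicts mapping a cell to its occupancy mean/variance
                       (unmapped cells default to mean 0, variance 0).
    A sample counts as a collision if any visited cell's sampled occupancy
    exceeds 0.5.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        if any(rng.gauss(occ_mean.get(c, 0.0),
                         occ_var.get(c, 0.0) ** 0.5) > 0.5
               for c in traj):
            hits += 1
    return hits / n_samples

def plan_safe(candidates, occ_mean, occ_var, p_max=0.05):
    """Return the first candidate trajectory whose estimated collision
    probability is at most p_max, or None to signal a replan."""
    for traj in candidates:
        if collision_prob(traj, occ_mean, occ_var) <= p_max:
            return traj
    return None
```

In a full system this check would sit inside the iterative (re)planning loop, rerun as the incremental map update changes the occupancy beliefs along the current trajectory.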