no code implementations • 25 Mar 2024 • Saad Abdul Ghani, Zizhao Wang, Peter Stone, Xuesu Xiao
In our new Dynamic Learning from Learned Hallucination (Dyna-LfLH), we design and learn a novel latent distribution from which dynamic obstacles are sampled, so that the generated training data can be used to learn a motion planner that navigates in dynamic environments.
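A minimal sketch of this sampling idea, with a hypothetical linear decoder standing in for the learned one (all names, dimensions, and weights below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decoder weights; in Dyna-LfLH both the latent distribution
# and the decoder would be learned from data (illustrative stand-in here).
LATENT_DIM, HORIZON = 4, 8
W = rng.normal(scale=0.3, size=(LATENT_DIM, HORIZON * 2))

def sample_obstacle_trajectory():
    """Sample a latent code and decode it into a 2-D obstacle trajectory."""
    z = rng.normal(size=LATENT_DIM)      # z ~ N(0, I), the latent prior
    steps = (z @ W).reshape(HORIZON, 2)  # decode into per-step (dx, dy)
    return np.cumsum(steps, axis=0)      # integrate steps into a path

# A batch of sampled obstacle trajectories forms synthetic training data.
batch = [sample_obstacle_trajectory() for _ in range(16)]
```

Each sampled trajectory would then be replayed as a moving obstacle when generating training episodes for the motion planner.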
no code implementations • 12 Mar 2024 • Mohammad Nazeri, Junzhe Wang, Amirreza Payandeh, Xuesu Xiao
However, most robotic visual navigation methods rely on deep learning models pre-trained on vision tasks, which prioritize salient objects -- not necessarily relevant to navigation and potentially misleading.
no code implementations • 6 Mar 2024 • Zifan Xu, Amir Hossain Raj, Xuesu Xiao, Peter Stone
To address the inefficiency of tracking distant navigation goals, we introduce a hierarchical locomotion controller that combines a classical planner tasked with planning waypoints to reach a faraway global goal location, and an RL-based policy trained to follow these waypoints by generating low-level motion commands.
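The two-level structure can be sketched as follows; a straight-line waypoint generator and a proportional controller stand in for the classical planner and the trained RL policy, respectively (both are simplifications, not the paper's components):

```python
import math

def plan_waypoints(start, goal, spacing=1.0):
    """High-level classical planner: evenly spaced waypoints on the
    straight line to the goal (a stand-in for a grid/graph planner)."""
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    dist = math.hypot(dx, dy)
    n = max(1, int(dist // spacing))
    return [(start[0] + dx * i / n, start[1] + dy * i / n)
            for i in range(1, n + 1)]

def follow_waypoint(pose, waypoint, k_lin=0.5, k_ang=1.0):
    """Low-level policy (a proportional controller standing in for the
    trained RL policy): map (x, y, theta) pose and the next waypoint to
    linear and angular velocity commands."""
    x, y, theta = pose
    heading = math.atan2(waypoint[1] - y, waypoint[0] - x)
    ang_err = (heading - theta + math.pi) % (2 * math.pi) - math.pi
    v = k_lin * math.hypot(waypoint[0] - x, waypoint[1] - y)
    return v, k_ang * ang_err
```

The division of labor mirrors the paper's: the planner handles the faraway global goal, while the low-level policy only ever tracks the nearest waypoint.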
no code implementations • 23 Jan 2024 • Zizhao Wang, Caroline Wang, Xuesu Xiao, Yuke Zhu, Peter Stone
Two desiderata of reinforcement learning (RL) algorithms are the ability to learn from relatively little experience and the ability to learn policies that generalize to a range of problem specifications.
1 code implementation • 22 Sep 2023 • Bhabaranjan Panigrahi, Amir Hossain Raj, Mohammad Nazeri, Xuesu Xiao
Autonomous mobile robots need to perceive the environments with their onboard sensors (e.g., LiDARs and RGB cameras) and then make appropriate navigation decisions.
1 code implementation • 18 Aug 2023 • Amirreza Payandeh, Dan Pluth, Jordan Hosier, Xuesu Xiao, Vijay K. Gurbani
Then, it evaluates the debater's performance in logical reasoning by contrasting the scenario where the persuader employs logical fallacies against one where logical reasoning is used.
no code implementations • 29 Jun 2023 • Anthony Francis, Claudia Pérez-D'Arpino, Chengshu Li, Fei Xia, Alexandre Alahi, Rachid Alami, Aniket Bera, Abhijat Biswas, Joydeep Biswas, Rohan Chandra, Hao-Tien Lewis Chiang, Michael Everett, Sehoon Ha, Justin Hart, Jonathan P. How, Haresh Karnan, Tsang-Wei Edward Lee, Luis J. Manso, Reuth Mirsky, Sören Pirk, Phani Teja Singamaneni, Peter Stone, Ada V. Taylor, Peter Trautman, Nathan Tsoi, Marynel Vázquez, Xuesu Xiao, Peng Xu, Naoki Yokoyama, Alexander Toshev, Roberto Martín-Martín
A major challenge to deploying robots widely is navigation in human-populated environments, commonly referred to as social robot navigation.
1 code implementation • 10 Oct 2022 • Zifan Xu, Bo Liu, Xuesu Xiao, Anirudh Nair, Peter Stone
Deep reinforcement learning (RL) has brought many successes for autonomous robot navigation.
no code implementations • 22 Sep 2022 • Xuesu Xiao, Tingnan Zhang, Krzysztof Choromanski, Edward Lee, Anthony Francis, Jake Varley, Stephen Tu, Sumeet Singh, Peng Xu, Fei Xia, Sven Mikael Persson, Dmitry Kalashnikov, Leila Takayama, Roy Frostig, Jie Tan, Carolina Parada, Vikas Sindhwani
Despite decades of research, existing navigation systems still face real-world challenges when deployed in the wild, e.g., in cluttered home environments or in human-occupied public spaces.
1 code implementation • 27 Jun 2022 • Zizhao Wang, Xuesu Xiao, Zifan Xu, Yuke Zhu, Peter Stone
Learning dynamics models accurately is an important goal for Model-Based Reinforcement Learning (MBRL), but most MBRL methods learn a dense dynamics model which is vulnerable to spurious correlations and therefore generalizes poorly to unseen states.
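The dense-vs-sparse contrast can be illustrated with a linear toy transition model; the causal mask below is hand-specified for illustration, whereas the paper's point is to learn such structure (all variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy transition: next_x depends only on state x and action a; feature s
# is spuriously correlated with a in the training data.
N = 200
x = rng.normal(size=N)
a = rng.normal(size=N)
s = a + 0.01 * rng.normal(size=N)                  # nearly collinear with a
next_x = 0.9 * x + 0.5 * a + 0.01 * rng.normal(size=N)

features = np.stack([x, a, s], axis=1)

# A dense model regresses on everything, including the spurious feature,
# so its weights on a and s are unstable under distribution shift.
w_dense, *_ = np.linalg.lstsq(features, next_x, rcond=None)

# A sparse model keeps only the causal parents of next_x and recovers the
# true coefficients, generalizing to states where s decorrelates from a.
mask = np.array([True, True, False])
w_sparse, *_ = np.linalg.lstsq(features[:, mask], next_x, rcond=None)
```

Because the sparse model never sees `s`, its predictions are unaffected when the spurious correlation breaks at test time.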
no code implementations • 16 Jun 2022 • Pranav Atreya, Haresh Karnan, Kavan Singh Sikand, Xuesu Xiao, Sadegh Rabiee, Joydeep Biswas
However, these approaches can only be applied to the control problem of following pre-computed, kinodynamically feasible trajectories.
no code implementations • 30 Mar 2022 • Haresh Karnan, Kavan Singh Sikand, Pranav Atreya, Sadegh Rabiee, Xuesu Xiao, Garrett Warnell, Peter Stone, Joydeep Biswas
In this paper, we hypothesize that to enable accurate high-speed off-road navigation using a learned IKD model, in addition to inertial information from the past, one must also anticipate the kinodynamic interactions of the vehicle with the terrain in the future.
no code implementations • 28 Mar 2022 • Haresh Karnan, Anirudh Nair, Xuesu Xiao, Garrett Warnell, Soeren Pirk, Alexander Toshev, Justin Hart, Joydeep Biswas, Peter Stone
Social navigation is the capability of an autonomous agent, such as a robot, to navigate in a 'socially compliant' manner in the presence of other intelligent agents such as humans.
no code implementations • 18 Sep 2021 • Kavan Singh Sikand, Sadegh Rabiee, Adam Uccello, Xuesu Xiao, Garrett Warnell, Joydeep Biswas
We introduce Visual Representation Learning for Preference-Aware Path Planning (VRL-PAP), an alternative approach that overcomes all three limitations: VRL-PAP leverages unlabeled human demonstrations of navigation to autonomously generate triplets for learning visual representations of terrain that are viewpoint invariant and encode terrain types in a continuous representation space.
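The triplets drive a standard hinge-style triplet objective (shown below as a generic sketch, not VRL-PAP's exact loss; the terrain names are illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Hinge-style triplet loss: embeddings of the same terrain type
    (anchor, positive) should be closer than embeddings of different
    terrain types (anchor, negative) by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# A satisfied triplet incurs zero loss; a violated one is penalized.
grass_a = np.array([0.1, 0.0])   # two patches of the same terrain
grass_b = np.array([0.2, 0.1])
gravel = np.array([2.0, 2.0])    # a patch of a different terrain
```

Minimizing this loss over triplets mined from the demonstrations yields an embedding space in which terrain types cluster continuously.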
no code implementations • 23 Jun 2021 • Reuth Mirsky, Xuesu Xiao, Justin Hart, Peter Stone
This survey aims to bridge this gap by introducing such a common language, using it to survey existing work, and highlighting open problems.
no code implementations • 19 May 2021 • Haresh Karnan, Garrett Warnell, Xuesu Xiao, Peter Stone
Is imitation learning for vision based autonomous navigation even possible in such scenarios?
no code implementations • 31 Mar 2020 • Xuesu Xiao, Bo Liu, Garrett Warnell, Jonathan Fink, Peter Stone
Existing autonomous robot navigation systems allow robots to move from one point to another in a collision-free manner.
no code implementations • 15 Jan 2020 • Xuesu Xiao, Jan Dufek, Robin R. Murphy
In this paper, an autonomous tethered Unmanned Aerial Vehicle (UAV) is developed into a visual assistant in a marsupial co-robots team, collaborating with a tele-operated Unmanned Ground Vehicle (UGV) for robot operations in unstructured or confined environments.
no code implementations • 29 Mar 2019 • Xuesu Xiao, Jan Dufek, Robin R. Murphy
This paper develops an autonomous tethered aerial visual assistant for robot operations in unstructured or confined environments.
no code implementations • 7 Mar 2019 • Xuesu Xiao, Jan Dufek, Robin Murphy
Without manually assigning a negative impact for risk, this planner takes in a pre-established viewpoint quality map and simultaneously plans the target location and the path leading to it, maximizing overall reward along the entire path while minimizing risk.
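One way to sketch this reward/risk trade-off is a Dijkstra search whose step cost folds risk in directly, so no separate risk penalty needs to be hand-assigned; this is a simplification of the paper's simultaneous target-and-path planning, and all names below are illustrative:

```python
import heapq

def plan_risk_aware(quality, risk, start, lam=1.0):
    """Dijkstra over a grid; each cell's cost trades off viewpoint
    quality against risk: cost = (q_max - quality) + lam * risk.
    Returns the cost-to-reach map, from which a target cell (high
    quality, cheaply reachable) and its path can both be extracted."""
    rows, cols = len(quality), len(quality[0])
    q_max = max(max(row) for row in quality)
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = (q_max - quality[nr][nc]) + lam * risk[nr][nc]
                nd = d + step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist
```

Raising `lam` makes the search detour around risky cells even when they sit on the highest-quality route.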