no code implementations • 25 Mar 2024 • Ishika Singh, David Traum, Jesse Thomason
We demonstrate that LLM-based goal decomposition leads to faster planning times than solving multi-agent PDDL problems directly, while also yielding fewer plan-execution steps than a single-agent plan and preserving execution success.
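The core recipe is simple enough to sketch. Below, `query_llm` and `solve_pddl` are hypothetical stand-ins (any completion API and any classical-planner wrapper, e.g. around Fast Downward), not the paper's interface: the LLM splits the joint goal into per-agent subgoals, and a classical planner then solves each single-agent subproblem.

```python
# Sketch only: helper names are hypothetical, not the paper's API.

def query_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for a real LLM completion call

def solve_pddl(domain: str, problem: str) -> list[str]:
    raise NotImplementedError  # stand-in for a classical-planner wrapper

def decompose_and_plan(domain: str, problem_template: str, goal: str):
    # Ask the LLM to split the joint goal into per-agent subgoals.
    subgoals = query_llm(
        "Split this goal into independent subgoals, one per agent:\n" + goal
    ).splitlines()
    # One small single-agent PDDL search per subgoal, rather than one
    # joint multi-agent search; `problem_template` has a {goal} slot.
    return [solve_pddl(domain, problem_template.format(goal=g))
            for g in subgoals]
```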
1 code implementation • 13 Feb 2024 • Wilbert Pumacay, Ishika Singh, Jiafei Duan, Ranjay Krishna, Jesse Thomason, Dieter Fox
To realize effective large-scale, real-world robotic applications, we must evaluate how well our robot policies adapt to changes in environmental conditions.
Ranked #1 on Robot Manipulation Generalization on The COLOSSEUM
no code implementations • 28 Nov 2023 • Wang Zhu, Ishika Singh, Yuan Huang, Robin Jia, Jesse Thomason
Data augmentation via back-translation is common when pretraining Vision-and-Language Navigation (VLN) models, even though the generated instructions are noisy.
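As a rough illustration of the pipeline under study here: back-translation pairs unlabeled trajectories with synthetic instructions produced by a trained "speaker" model. The `speaker` object below is a hypothetical stand-in, not a specific library API.

```python
# Sketch of back-translation augmentation for VLN pretraining.

def back_translate(speaker, unlabeled_trajectories):
    pairs = []
    for traj in unlabeled_trajectories:
        instruction = speaker.generate(traj)  # synthetic, possibly noisy
        pairs.append((instruction, traj))     # extra (instruction, path) data
    return pairs
```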
no code implementations • 22 Sep 2022 • Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, Animesh Garg
To reduce that effort, large language models (LLMs) can be used to score potential next actions during task planning, and even to generate action sequences directly, given an instruction in natural language and no additional domain information.
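The action-scoring half of that idea fits in a few lines. The sketch assumes a hypothetical `token_logprobs(text)` helper that returns per-token log-probabilities (many completion APIs expose something comparable); it is an illustration, not the paper's implementation.

```python
# Sketch of LLM-based action scoring; `token_logprobs` is hypothetical.

def token_logprobs(text: str) -> list[float]:
    raise NotImplementedError  # stand-in for a real LLM log-prob query

def pick_next_action(plan_so_far: str, candidates: list[str]) -> str:
    # Rank each candidate by its mean token log-probability when appended
    # to the plan so far; the most "natural" continuation wins.
    def score(action: str) -> float:
        lps = token_logprobs(plan_so_far + "\n" + action)
        return sum(lps) / len(lps)
    return max(candidates, key=score)
```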
1 code implementation • 18 Jul 2021 • Ishika Singh, Gargi Singh, Ashutosh Modi
Given the sample inefficiency of RL approaches, it is impractical to learn textual representations rich enough to understand and reason over the textual observations in such a complex game environment.
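One generic way around this, sketched below in PyTorch, is to reuse a frozen pretrained text encoder for observations and train only a small policy head with RL, so language understanding need not be learned from reward alone. The class is illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# A frozen pretrained encoder (e.g. BERT) embeds textual observations;
# only this small head is updated from the game's reward signal.

class PolicyHead(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, obs_embedding: torch.Tensor) -> torch.Tensor:
        # `obs_embedding` comes from the frozen encoder.
        return self.net(obs_embedding)  # action logits
```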
1 code implementation • COLING 2020 • Ishika Singh, Ahsan Barkati, Tushar Goswamy, Ashutosh Modi
The model gives the user the flexibility to control the category and intensity of the expressed emotion, as well as the topic of the generated text.
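One generic way to expose such control knobs, shown purely for illustration (this is not the paper's method), is to add a lexicon bonus scaled by an intensity parameter to the token logits before sampling.

```python
import numpy as np

# Bias sampling toward an emotion lexicon; `intensity` is the control knob.
# `logits` is a NumPy array aligned with `vocab`, a list of tokens.

def affect_biased_sample(logits, vocab, emotion_lexicon, intensity):
    bonus = np.array([intensity if tok in emotion_lexicon else 0.0
                      for tok in vocab])
    z = logits + bonus
    probs = np.exp(z - z.max())   # softmax with the usual max-shift
    probs /= probs.sum()
    return np.random.choice(len(vocab), p=probs)
```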
1 code implementation • 16 Jun 2020 • Ishika Singh, Haoyi Zhou, Kunlin Yang, Meng Ding, Bill Lin, Pengtao Xie
To address this problem, we propose federated neural architecture search (FNAS), where different parties collectively search for a differentiable architecture by exchanging gradients of architecture variables without exposing their data to other parties.
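The exchange itself is easy to picture. In the sketch below, each party computes the gradient of the shared architecture variables on its private data, and only those gradients are communicated and averaged; `party.architecture_gradient` is a hypothetical stand-in, not the paper's API.

```python
import numpy as np

# One round of FNAS-style architecture search: average the parties'
# locally computed gradients of the shared architecture variables
# `alpha`; raw training data never leaves any party.

def federated_alpha_step(alpha, parties, lr=0.01):
    grads = [party.architecture_gradient(alpha) for party in parties]
    alpha = alpha - lr * np.mean(grads, axis=0)  # averaged update
    return alpha
```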