Search Results for author: Ishika Singh

Found 7 papers, 4 papers with code

TwoStep: Multi-agent Task Planning using Classical Planners and Large Language Models

no code implementations • 25 Mar 2024 • Ishika Singh, David Traum, Jesse Thomason

We demonstrate that LLM-based goal decomposition leads to faster planning times than solving multi-agent PDDL problems directly, while achieving fewer plan execution steps than a single-agent plan alone and preserving execution success.
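A minimal sketch of the decomposition-then-plan pipeline this entry describes is shown below. The `query_llm` and `run_classical_planner` stubs, the prompt wording, and the PDDL strings are hypothetical placeholders, not the paper's prompts or domains.

```python
# Sketch of the two-step pipeline: an LLM splits a multi-agent goal into
# per-agent subgoals, and a classical planner solves each single-agent problem.
# `query_llm` and `run_classical_planner` are illustrative stubs.

from typing import List


def query_llm(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would call an API here."""
    # Canned decomposition used purely for illustration.
    return "(and (on blockA table))\n(and (on blockB blockA))"


def run_classical_planner(domain_pddl: str, goal_pddl: str) -> List[str]:
    """Stand-in for invoking a PDDL planner (e.g. via subprocess)."""
    return [f"; plan for goal: {goal_pddl.strip()}"]


def two_step_plan(multi_agent_goal: str, domain_pddl: str) -> List[List[str]]:
    prompt = (
        "Decompose the following multi-agent goal into one PDDL goal per agent:\n"
        f"{multi_agent_goal}"
    )
    subgoals = query_llm(prompt).splitlines()
    # Each agent's subgoal becomes an ordinary single-agent planning problem.
    return [run_classical_planner(domain_pddl, g) for g in subgoals]


if __name__ == "__main__":
    plans = two_step_plan("(and (on blockA table) (on blockB blockA))",
                          "(define (domain blocks) ...)")
    for i, plan in enumerate(plans):
        print(f"agent {i}:", plan)
```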

THE COLOSSEUM: A Benchmark for Evaluating Generalization for Robotic Manipulation

1 code implementation • 13 Feb 2024 • Wilbert Pumacay, Ishika Singh, Jiafei Duan, Ranjay Krishna, Jesse Thomason, Dieter Fox

To realize effective large-scale, real-world robotic applications, we must evaluate how well our robot policies adapt to changes in environmental conditions.

Robot Manipulation Generalization

Does VLN Pretraining Work with Nonsensical or Irrelevant Instructions?

no code implementations • 28 Nov 2023 • Wang Zhu, Ishika Singh, Yuan Huang, Robin Jia, Jesse Thomason

Data augmentation via back-translation is common when pretraining Vision-and-Language Navigation (VLN) models, even though the generated instructions are noisy.

Data Augmentation, Translation, +1
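The back-translation step mentioned in this entry can be sketched as labeling unlabeled trajectories with a learned speaker model; the `speaker_model` stub and data format below are illustrative assumptions, not the pretraining code the paper studies.

```python
# Sketch of back-translation-style augmentation for VLN pretraining:
# a speaker model labels unlabeled trajectories with synthetic instructions,
# which are then mixed with human-written data. All names are illustrative.

from typing import Dict, List


def speaker_model(trajectory: List[str]) -> str:
    """Hypothetical stand-in for a trained trajectory-to-instruction model."""
    return f"walk past the {trajectory[0]} and stop near the {trajectory[-1]}"


def back_translate(unlabeled_trajectories: List[List[str]]) -> List[Dict[str, object]]:
    # The generated instructions are noisy, which is the property the paper
    # above probes by swapping in nonsensical or irrelevant instructions.
    return [
        {"trajectory": traj, "instruction": speaker_model(traj), "source": "synthetic"}
        for traj in unlabeled_trajectories
    ]


if __name__ == "__main__":
    trajs = [["kitchen", "hallway", "sofa"], ["door", "stairs", "bedroom"]]
    for sample in back_translate(trajs):
        print(sample)
```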

ProgPrompt: Generating Situated Robot Task Plans using Large Language Models

no code implementations • 22 Sep 2022 • Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, Animesh Garg

To ameliorate that effort, large language models (LLMs) can be used to score potential next actions during task planning, and even generate action sequences directly, given an instruction in natural language with no additional domain information.
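ProgPrompt-style prompting frames the plan as code: available actions and objects appear in a program-like header and the LLM completes a task function. The prompt layout and the `call_llm` stub below are a simplified sketch, not the released prompts.

```python
# Sketch of a ProgPrompt-style prompt: environment actions appear as imports,
# available objects as a list, and an example task plan as a Python function.
# The LLM is asked to complete the next function. `call_llm` is a hypothetical stub.

ACTIONS = ["grab", "put_on", "open", "close", "walk_to"]
OBJECTS = ["salmon", "microwave", "plate", "fridge"]

EXAMPLE_PLAN = '''
def put_salmon_in_microwave():
    walk_to("fridge"); open("fridge"); grab("salmon")
    walk_to("microwave"); open("microwave")
    put_on("salmon", "microwave"); close("microwave")
'''


def build_prompt(task: str) -> str:
    header = "\n".join(f"from actions import {a}" for a in ACTIONS)
    objects = f"objects = {OBJECTS}"
    return f"{header}\n{objects}\n{EXAMPLE_PLAN}\ndef {task}():\n"


def call_llm(prompt: str) -> str:
    """Stand-in for an LLM completion call."""
    return '    walk_to("fridge"); open("fridge"); grab("plate")\n'


if __name__ == "__main__":
    prompt = build_prompt("fetch_a_plate")
    print(prompt + call_llm(prompt))
```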

Pre-trained Language Models as Prior Knowledge for Playing Text-based Games

1 code implementation • 18 Jul 2021 • Ishika Singh, Gargi Singh, Ashutosh Modi

Given the sample inefficiency of RL approaches, it is difficult to learn textual representations rich enough to understand and reason over the textual observations in such a complicated game environment.

text-based games
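One common way to use a pre-trained LM as prior knowledge in a text-based game agent is to encode the observation with a pre-trained transformer and score candidate actions on top of it. The sketch below assumes the Hugging Face `transformers` package and an untrained scoring head; it is not the paper's released architecture.

```python
# Sketch: encode a text-game observation with a pre-trained LM and score actions.
# Model choice and the linear head are illustrative.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
q_head = torch.nn.Linear(encoder.config.hidden_size, 1)  # untrained scoring head, for illustration


def score_action(observation: str, action: str) -> float:
    # Encode the (observation, candidate action) pair and use the [CLS] embedding.
    inputs = tokenizer(observation, action, return_tensors="pt", truncation=True)
    with torch.no_grad():
        state = encoder(**inputs).last_hidden_state[:, 0]
        return q_head(state).item()


obs = "You are in a dimly lit kitchen. There is a rusty key on the table."
for act in ["take key", "open fridge", "go north"]:
    print(act, score_action(obs, act))
```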

Adapting a Language Model for Controlled Affective Text Generation

1 code implementation • COLING 2020 • Ishika Singh, Ahsan Barkati, Tushar Goswamy, Ashutosh Modi

The model gives a user the flexibility to control the category and intensity of emotion as well as the topic of the generated text.

Language Modelling, Text Generation
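One generic way to expose this kind of control knob, not necessarily the mechanism used in the paper above, is to bias decoding toward an emotion lexicon with an intensity-scaled logit bonus. The tiny lexicon and GPT-2 usage below are purely illustrative.

```python
# Generic sketch of emotion-controlled decoding: add an intensity-scaled bonus to
# the logits of lexicon words at each step. Illustration of the control interface only.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Tiny hypothetical lexicon; a real system would use a proper affect resource.
EMOTION_LEXICON = {
    "joy": [" happy", " delighted", " wonderful"],
    "anger": [" furious", " outraged", " terrible"],
}


def generate(prompt: str, emotion: str, intensity: float, steps: int = 20) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    bias_ids = [tokenizer.encode(w)[0] for w in EMOTION_LEXICON[emotion]]
    with torch.no_grad():
        for _ in range(steps):
            logits = model(ids).logits[:, -1, :]
            logits[:, bias_ids] += intensity        # nudge decoding toward affective words
            next_id = torch.argmax(logits, dim=-1, keepdim=True)
            ids = torch.cat([ids, next_id], dim=-1)
    return tokenizer.decode(ids[0])


print(generate("The weather today is", emotion="joy", intensity=4.0))
```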

Differentially-private Federated Neural Architecture Search

1 code implementation • 16 Jun 2020 • Ishika Singh, Haoyi Zhou, Kunlin Yang, Meng Ding, Bill Lin, Pengtao Xie

To address this problem, we propose federated neural architecture search (FNAS), where different parties collectively search for a differentiable architecture by exchanging gradients of architecture variables without exposing their data to other parties.

Neural Architecture Search
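The exchange described in this entry can be sketched as each party computing a DARTS-style gradient of shared architecture variables on its own data, privatizing it, and sending only the noisy gradient for aggregation. The clipping threshold, noise scale, and update rule below are illustrative assumptions rather than the paper's calibrated mechanism.

```python
# Sketch of the FNAS-style exchange: parties share only noisy gradients of the
# architecture variables, never their data. All constants are illustrative.

import torch

NUM_PARTIES, NUM_ARCH_VARS = 3, 8
CLIP_NORM, NOISE_STD, LR = 1.0, 0.1, 0.05

alpha = torch.zeros(NUM_ARCH_VARS)  # shared architecture variables (DARTS-style)


def local_arch_gradient(party_id: int) -> torch.Tensor:
    """Stand-in for a gradient of the architecture variables computed on private data."""
    torch.manual_seed(party_id)
    return torch.randn(NUM_ARCH_VARS)


def privatize(grad: torch.Tensor) -> torch.Tensor:
    # Clip the gradient norm, then add Gaussian noise (DP-SGD-style mechanism).
    scale = (CLIP_NORM / (grad.norm() + 1e-12)).clamp(max=1.0)
    return grad * scale + NOISE_STD * torch.randn_like(grad)


for _ in range(5):
    noisy_grads = [privatize(local_arch_gradient(p)) for p in range(NUM_PARTIES)]
    alpha -= LR * torch.stack(noisy_grads).mean(dim=0)  # aggregate noisy gradients only

print("architecture variables after federated rounds:", alpha)
```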
