Robot Task Planning

18 papers with code • 3 benchmarks • 5 datasets

Robot task planning is the problem of generating a sequence of high-level actions (e.g., grasp, move, place) that a robot can execute to achieve a specified goal in its environment.

Most implemented papers

The CoSTAR Block Stacking Dataset: Learning with Workspace Constraints

jhu-lcsr/costar_plan 27 Oct 2018

We show that a mild relaxation of the task and workspace constraints implicit in existing object grasping datasets can cause neural-network-based grasping algorithms to fail on even a simple block-stacking task when executed under more realistic circumstances.

3D Dynamic Scene Graphs: Actionable Spatial Perception with Places, Objects, and Humans

MIT-SPARK/Kimera 15 Feb 2020

Our second contribution is to provide the first fully automatic Spatial PerceptIon eNgine (SPIN) to build a 3D dynamic scene graph (DSG) from visual-inertial data.
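
The DSG itself is a layered graph: nodes at different layers represent places, objects, and humans, and edges encode spatial relations within and across layers. A minimal sketch of that data structure, assuming illustrative class, method, and layer names (this is not Kimera's actual API):

```python
from dataclasses import dataclass, field

# Minimal layered scene-graph sketch. Layer names follow the paper's
# hierarchy (places, objects, humans, ...), but every class and method
# here is illustrative, not Kimera's actual API.
@dataclass
class Node:
    node_id: int
    layer: str                       # e.g. "place", "object", "human"
    attributes: dict = field(default_factory=dict)

@dataclass
class SceneGraph:
    nodes: dict = field(default_factory=dict)   # node_id -> Node
    edges: set = field(default_factory=set)     # undirected (id, id) pairs

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def connect(self, a: int, b: int) -> None:
        # Intra-layer edges model traversability between places;
        # inter-layer edges model containment (an object in a room, etc.).
        self.edges.add((a, b))

    def layer(self, name: str) -> list:
        return [n for n in self.nodes.values() if n.layer == name]

g = SceneGraph()
g.add_node(Node(0, "place", {"label": "kitchen"}))
g.add_node(Node(1, "object", {"label": "mug", "pos": (1.0, 0.2, 0.9)}))
g.connect(0, 1)   # the mug is located at the kitchen place
print([n.attributes["label"] for n in g.layer("object")])   # ['mug']
```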

Do As I Can, Not As I Say: Grounding Language in Robotic Affordances

flowersteam/lamorel 4 Apr 2022

We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally extended instructions, while value functions associated with these skills provide the grounding necessary to connect this knowledge to a particular physical environment.
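
Concretely, the selection rule multiplies the language model's probability that a skill is a useful next step with that skill's value (affordance) at the current state, and executes the highest-scoring skill. A hedged sketch of that rule, where the skill list and both scorers are stand-in stubs rather than the paper's actual models:

```python
import math

# SayCan-style skill selection sketch: the LLM scores how useful each
# skill is for the instruction, the value function scores how likely
# the skill is to succeed from the current state, and the product is
# maximized. Both scorers below are placeholder stubs, not real models.
SKILLS = ["pick up the sponge", "go to the table", "put down the sponge"]

def llm_logprob(instruction: str, skill: str) -> float:
    # Placeholder: a real system queries an LLM for the log-probability
    # of `skill` as the next step toward `instruction`.
    return -float(len(skill)) / 10.0

def affordance_value(state: dict, skill: str) -> float:
    # Placeholder: a real system evaluates a learned value function;
    # here, picking up is infeasible while something is already held.
    if "pick up" in skill and state.get("holding") is not None:
        return 0.0
    return 0.8

def select_skill(instruction: str, state: dict) -> str:
    # Combined score = p_LLM(skill | instruction) * value(state, skill)
    scores = {s: math.exp(llm_logprob(instruction, s)) * affordance_value(state, s)
              for s in SKILLS}
    return max(scores, key=scores.get)

print(select_skill("wipe the table", {"holding": None}))
```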

You Only Demonstrate Once: Category-Level Manipulation from Single Visual Demonstration

wenbowen123/catgrasp 30 Jan 2022

The canonical object representation is learned solely in simulation and then used to parse a category-level task trajectory from a single demonstration video.
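
The transfer idea can be illustrated with plain rigid-body transforms: express the demonstrated gripper poses in the object's canonical frame, then map them onto a new instance of the category. A minimal sketch, assuming known 4x4 object poses (the paper instead learns the canonical representation from simulation):

```python
import numpy as np

# Sketch of trajectory transfer through a canonical object frame: poses
# demonstrated relative to one object instance are re-expressed in the
# category's canonical frame, then mapped onto a new instance's pose.
# The 4x4 transforms below are illustrative stand-ins.
def transfer(demo_poses, T_demo_obj, T_new_obj):
    """demo_poses: list of 4x4 gripper poses in the world frame.
    T_demo_obj / T_new_obj: world-from-object transforms for the
    demonstrated and the new object instance."""
    out = []
    for T_world_grip in demo_poses:
        T_canon_grip = np.linalg.inv(T_demo_obj) @ T_world_grip  # into canonical frame
        out.append(T_new_obj @ T_canon_grip)                     # onto new instance
    return out

# Usage: a single waypoint 10 cm above the demo object, replayed on a
# new object translated 0.5 m along x.
T_demo = np.eye(4)
T_new = np.eye(4); T_new[0, 3] = 0.5
wp = np.eye(4); wp[2, 3] = 0.10
print(transfer([wp], T_demo, T_new)[0][:3, 3])   # -> [0.5 0.  0.1]
```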

Visual Robot Task Planning

jhu-lcsr/costar_plan 30 Mar 2018

In this work, we propose a neural network architecture and associated planning algorithm that (1) learns a representation of the world useful for generating prospective futures after high-level actions are applied, (2) uses this generative model to simulate the outcome of sequences of high-level actions in a variety of environments, and (3) uses the same representation to evaluate candidate actions and perform tree search for an action sequence in a new environment.
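
The planning loop this describes is search over imagined futures: roll the learned forward model ahead under candidate high-level actions, score the predicted states, and keep the best branches. A toy best-first sketch, with integer stand-in states and hand-written stubs in place of the learned networks:

```python
import heapq
import itertools

# Best-first search over a learned forward model: predict_next() stands
# in for the generative model h(x, a) -> x', score() for the learned
# state evaluator. Both are toy placeholders, not the paper's networks.
ACTIONS = ["grasp", "lift", "place"]

def predict_next(state: int, action: int) -> int:
    return state * len(ACTIONS) + action + 1   # toy deterministic dynamics

def score(state: int) -> float:
    return -abs(state - 40)                    # pretend 40 is the goal region

def plan(start: int, horizon: int = 3, beam: int = 8):
    counter = itertools.count()                # tie-breaker for the heap
    frontier = [(-score(start), next(counter), start, [])]
    best = (start, [])
    for _ in range(horizon):
        expansions = []
        for _neg, _, state, seq in frontier:
            for a, name in enumerate(ACTIONS):
                nxt = predict_next(state, a)
                heapq.heappush(expansions,
                               (-score(nxt), next(counter), nxt, seq + [name]))
        frontier = heapq.nsmallest(beam, expansions)   # keep the best branches
        if -frontier[0][0] > score(best[0]):
            best = (frontier[0][2], frontier[0][3])
    return best[1]

print(plan(0))   # e.g. ['place', 'place', 'place']
```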

Task Planning with a Weighted Functional Object-Oriented Network

davidpaulius/foon_api 1 May 2019

The paper also presents a task planning algorithm for the weighted FOON that allocates manipulation actions between the robot and the human, achieving optimal performance while minimizing human effort.
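
The allocation idea can be sketched directly: each manipulation unit carries a weight reflecting the robot's expected success, and actions the robot is unlikely to complete are delegated to the human. The plan, weights, and threshold rule below are illustrative, not the paper's exact algorithm or data:

```python
# Human-robot allocation over a weighted plan: each action carries a
# robot success weight (as in a weighted FOON); low-confidence actions
# are delegated to the human to keep overall success high while robot
# actions carry as much of the load as possible.
plan = [
    ("pick up knife",  0.95),   # (action, robot success weight)
    ("slice tomato",   0.40),
    ("place on plate", 0.90),
]

def allocate(plan, min_success=0.6):
    """Assign each action to the robot when its success weight clears
    the threshold; otherwise delegate it to the human."""
    assignment, expected_success = [], 1.0
    for action, weight in plan:
        if weight >= min_success:
            assignment.append((action, "robot"))
            expected_success *= weight
        else:
            assignment.append((action, "human"))   # assume human succeeds
    return assignment, expected_success

schedule, p = allocate(plan)
for action, agent in schedule:
    print(f"{agent:>5}: {action}")
print(f"expected success of robot steps: {p:.2f}")
```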

PackIt: A Virtual Environment for Geometric Planning

princeton-vl/PackIt ICML 2020

The ability to jointly understand the geometry of objects and plan actions for manipulating them is crucial for intelligent agents.

Q-attention: Enabling Efficient Learning for Vision-based Robotic Manipulation

stepjam/ARM 31 May 2021

Despite their successes elsewhere, reinforcement learning methods have yet to have a breakthrough moment when applied to a broad range of robotic manipulation tasks.

CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation

wenbowen123/catgrasp 19 Sep 2021

This work proposes a framework to learn task-relevant grasping for industrial objects without the need for time-consuming real-world data collection or manual annotation.

Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents

huangwl18/language-planner 18 Jan 2022

However, the plans produced naively by LLMs often cannot map precisely to admissible actions.
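
The paper's fix is to project each free-form step the LLM generates onto the most similar action the environment actually admits. A sketch of that projection, using a toy bag-of-words vector in place of the paper's learned sentence embeddings (the action list and LLM step are illustrative):

```python
import math
from collections import Counter

# Map a free-form LLM step onto the nearest admissible action by cosine
# similarity. The word-count "embedding" below is a toy stand-in for a
# learned sentence encoder; the admissible-action list is illustrative.
ADMISSIBLE = ["walk to kitchen", "open fridge", "grab milk", "close fridge"]

def embed(text: str) -> Counter:
    return Counter(text.lower().split())        # toy word-count vector

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def to_admissible(llm_step: str) -> str:
    # Replace the free-form step with the nearest admissible action.
    v = embed(llm_step)
    return max(ADMISSIBLE, key=lambda act: cosine(v, embed(act)))

print(to_admissible("pick up the milk carton"))   # -> "grab milk"
```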