Search Results for author: Carl Qi

Found 5 papers, 0 papers with code

Robot Air Hockey: A Manipulation Testbed for Robot Learning with Reinforcement Learning

no code implementations · 6 May 2024 · Caleb Chuck, Carl Qi, Michael J. Munje, Shuozhe Li, Max Rudolph, Chang Shi, Siddhant Agarwal, Harshit Sikchi, Abhinav Peri, Sarthak Dayal, Evan Kuo, Kavan Mehta, Anthony Wang, Peter Stone, Amy Zhang, Scott Niekum

Reinforcement Learning is a promising tool for learning complex policies even in fast-moving and object-interactive domains where human teleoperation or hard-coded policies might fail.
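As a generic illustration of the reinforcement-learning framing mentioned above (not the paper's air-hockey testbed or method), the sketch below runs tabular Q-learning on a hypothetical toy chain environment, learning a policy from reward alone rather than hard-coded rules:

```python
import random

# Illustrative only: tabular Q-learning on a toy 1-D chain, a stand-in for any
# environment where a policy is learned from reward rather than hand-coded.
# States 0..4; reaching state 4 yields reward 1. Actions: 0 = left, 1 = right.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

def step(s, a):
    """Deterministic transition; returns (next_state, reward, done)."""
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

for _ in range(200):  # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy exploration
        a = random.randrange(2) if random.random() < EPS else max((0, 1), key=lambda a: Q[s][a])
        s2, r, done = step(s, a)
        # standard Q-learning update toward the bootstrapped target
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) * (not done) - Q[s][a])
        s = s2

greedy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)]
```

After training, the greedy policy moves right toward the goal from every non-terminal state.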

Learning Generalizable Tool-use Skills through Trajectory Generation

no code implementations · 29 Sep 2023 · Carl Qi, Yilin Wu, Lifan Yu, Haoyue Liu, Bowen Jiang, Xingyu Lin, David Held

We propose to learn a generative model of the tool-use trajectories as a sequence of tool point clouds, which generalizes to different tool shapes.

Deformable Object Manipulation
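To make the representation in the abstract concrete, here is a minimal sketch of a trajectory stored as a sequence of tool point clouds. The class and method names are hypothetical illustrations of why a point-cloud encoding is shape-agnostic, not the authors' generative model:

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float, float]  # (x, y, z) sampled from the tool surface

@dataclass
class ToolTrajectory:
    """Hypothetical container: one tool point cloud per time step.

    Because the tool is represented only by sampled surface points, the same
    structure holds trajectories for tools of any shape or point count.
    """
    frames: List[List[Point]]

    def translate(self, dx: float, dy: float, dz: float) -> "ToolTrajectory":
        """Rigidly shift every frame; works identically for any tool shape."""
        return ToolTrajectory(
            [[(x + dx, y + dy, z + dz) for (x, y, z) in f] for f in self.frames]
        )

# A toy two-frame trajectory of a two-point "tool".
traj = ToolTrajectory(frames=[
    [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)],
    [(0.0, 0.1, 0.0), (0.1, 0.1, 0.0)],
])
shifted = traj.translate(1.0, 0.0, 0.0)
```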

Planning with Spatial-Temporal Abstraction from Point Clouds for Deformable Object Manipulation

no code implementations · 27 Oct 2022 · Xingyu Lin, Carl Qi, Yunchu Zhang, Zhiao Huang, Katerina Fragkiadaki, Yunzhu Li, Chuang Gan, David Held

Effective planning of long-horizon deformable object manipulation requires suitable abstractions at both the spatial and temporal levels.

Deformable Object Manipulation

Imitating, Fast and Slow: Robust learning from demonstrations via decision-time planning

no code implementations · 7 Apr 2022 · Carl Qi, Pieter Abbeel, Aditya Grover

The goal of imitation learning is to mimic expert behavior from demonstrations, without access to an explicit reward signal.

Imitation Learning · reinforcement-learning · +2
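The imitation-learning setup described in the abstract can be illustrated with plain behavior cloning: fit a policy to expert (state, action) pairs, never touching a reward signal. This is a generic sketch of the problem setting, not the paper's decision-time planning method, and the expert below is hypothetical:

```python
def fit_linear_policy(demos):
    """Least-squares fit of action = w * state + b to expert demonstrations.

    Only (state, action) pairs are used; no reward signal appears anywhere,
    which is the defining constraint of imitation learning.
    """
    n = len(demos)
    sx = sum(s for s, _ in demos)
    sa = sum(a for _, a in demos)
    sxx = sum(s * s for s, _ in demos)
    sxa = sum(s * a for s, a in demos)
    w = (n * sxa - sx * sa) / (n * sxx - sx * sx)
    b = (sa - w * sx) / n
    return lambda s: w * s + b

# Hypothetical expert whose action is always twice the state value.
expert_demos = [(s, 2.0 * s) for s in (-1.0, 0.0, 0.5, 2.0)]
policy = fit_linear_policy(expert_demos)
```

On these exactly linear demonstrations the cloned policy recovers the expert's mapping, so `policy(3.0)` returns 6.0.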

Robust Imitation via Decision-Time Planning

no code implementations · 1 Jan 2021 · Carl Qi, Pieter Abbeel, Aditya Grover

The goal of imitation learning is to mimic expert behavior from demonstrations, without access to an explicit reward signal.

Imitation Learning · reinforcement-learning · +2
