no code implementations • 15 Mar 2024 • Carmelo Sferrazza, Dun-Ming Huang, Xingyu Lin, Youngwoon Lee, Pieter Abbeel
Humanoid robots hold great promise in assisting humans in diverse environments and tasks, due to the flexibility and adaptability afforded by their human-like morphology.
no code implementations • 2 Nov 2023 • Carmelo Sferrazza, Younggyo Seo, Hao Liu, Youngwoon Lee, Pieter Abbeel
For tasks requiring object manipulation, we seamlessly and effectively exploit the complementarity of our senses of vision and touch.
no code implementations • 2 Nov 2023 • Vint Lee, Pieter Abbeel, Youngwoon Lee
Model-based reinforcement learning (MBRL) has gained much attention for its ability to learn complex behaviors in a sample-efficient way: it plans actions by generating imaginary trajectories with predicted rewards.
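The planning idea described above can be sketched as a minimal random-shooting planner. This is an illustrative stand-in, not the paper's method: `dynamics_model` and `reward_model` are hypothetical placeholders for learned models, and the candidate-sampling scheme is the simplest possible choice.

```python
import numpy as np

def plan_action(dynamics_model, reward_model, state,
                horizon=10, n_candidates=100, action_dim=2, rng=None):
    """Return the first action of the candidate action sequence with the
    highest predicted cumulative reward over an imagined rollout."""
    rng = rng or np.random.default_rng(0)
    # Sample candidate action sequences uniformly (random-shooting planner).
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon, action_dim))
    returns = np.zeros(n_candidates)
    for i, actions in enumerate(candidates):
        s = state
        for a in actions:
            s = dynamics_model(s, a)          # imagined next state
            returns[i] += reward_model(s, a)  # predicted reward
    best = candidates[np.argmax(returns)]
    return best[0]  # execute only the first action (MPC-style replanning)
```

In practice the environment is never stepped during planning; only the learned models are queried, which is what makes the approach sample-efficient.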
no code implementations • 31 Aug 2023 • Amber Xie, Youngwoon Lee, Pieter Abbeel, Stephen James
Contact is at the core of robotic manipulation.
1 code implementation • 22 May 2023 • Minho Heo, Youngwoon Lee, Doohyun Lee, Joseph J. Lim
We benchmark the performance of offline RL and IL algorithms on our assembly tasks and demonstrate the need to improve such algorithms to be able to solve our tasks in the real world, providing ample opportunities for future research.
3 code implementations • 10 Feb 2023 • Seohong Park, Kimin Lee, Youngwoon Lee, Pieter Abbeel
One of the key capabilities of intelligent agents is the ability to discover useful skills without external supervision.
no code implementations • 9 Dec 2022 • Shivin Dass, Karl Pertsch, Hejia Zhang, Youngwoon Lee, Joseph J. Lim, Stefanos Nikolaidis
Large-scale data is an essential component of machine learning as demonstrated in recent advances in natural language processing and computer vision research.
no code implementations • 15 Jul 2022 • Lucy Xiaoyang Shi, Joseph J. Lim, Youngwoon Lee
From this intuition, we propose a Skill-based Model-based RL framework (SkiMo) that enables planning in the skill space using a skill dynamics model, which directly predicts skill outcomes rather than predicting every low-level detail of the intermediate states step by step.
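The skill-space planning described above can be sketched as follows, under assumed interfaces: `skill_dynamics` is a hypothetical stand-in for a learned model that jumps directly to the state after a skill finishes, so planning costs one model call per skill rather than one per environment step.

```python
import numpy as np

def plan_in_skill_space(skill_dynamics, reward_fn, state,
                        n_skills=8, seq_len=3, n_candidates=64, rng=None):
    """Score candidate skill sequences with a skill dynamics model and
    return the highest-scoring sequence."""
    rng = rng or np.random.default_rng(0)
    seqs = rng.integers(0, n_skills, size=(n_candidates, seq_len))
    returns = np.zeros(n_candidates)
    for i, seq in enumerate(seqs):
        s = state
        for z in seq:
            s = skill_dynamics(s, z)  # one call per skill, not per env step
            returns[i] += reward_fn(s)
    return seqs[np.argmax(returns)]
```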
no code implementations • NeurIPS 2021 • Youngwoon Lee, Andrew Szot, Shao-Hua Sun, Joseph J. Lim
Task progress is intuitive and readily available task information that can guide an agent closer to the desired goal.
no code implementations • 15 Nov 2021 • Youngwoon Lee, Joseph J. Lim, Anima Anandkumar, Yuke Zhu
However, these approaches require larger state distributions to be covered as more policies are sequenced, and thus are limited to short skill sequences.
1 code implementation • 11 Nov 2021 • I-Chun Arthur Liu, Shagun Uppal, Gaurav S. Sukhatme, Joseph J. Lim, Peter Englert, Youngwoon Lee
Learning complex manipulation tasks in realistic, obstructed environments is a challenging problem due to hard exploration in the presence of obstacles and high-dimensional visual observations.
no code implementations • ICLR Workshop SSL-RL 2021 • Karl Pertsch, Youngwoon Lee, Yue Wu, Joseph J. Lim
Prior approaches for demonstration-guided RL treat every new task as an independent learning problem and attempt to follow the provided demonstrations step-by-step, akin to a human trying to imitate a completely unseen behavior by following the demonstrator's exact muscle movements.
1 code implementation • 1 Jul 2021 • Grace Zhang, Linghan Zhong, Youngwoon Lee, Joseph J. Lim
In this paper, we propose a novel policy transfer method with iterative "environment grounding", IDAPT, that alternates between (1) directly minimizing both visual and dynamics domain gaps by grounding the source environment in the target environment domains, and (2) training a policy on the grounded source environment.
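The alternation described above can be sketched as a simple loop. This is a structural illustration only: `ground_fn` and `train_policy` are hypothetical placeholders for step (1) (grounding the source environment in the target domains) and step (2) (training on the grounded environment).

```python
def alternating_transfer(source_env, target_env, ground_fn, train_policy,
                         n_iterations=3, policy=None):
    """Alternate between grounding the source environment in the target
    domain and training the policy on the grounded source environment."""
    for _ in range(n_iterations):
        # Step (1): reduce visual and dynamics gaps by grounding the source
        # environment in the target domains, using the current policy.
        grounded_env = ground_fn(source_env, target_env, policy)
        # Step (2): train the policy on the grounded source environment.
        policy = train_policy(grounded_env, policy)
    return policy
```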
no code implementations • 1 Jan 2021 • Andrew Szot, Youngwoon Lee, Shao-Hua Sun, Joseph J Lim
Humans can effectively learn to estimate how close they are to completing a desired task simply by watching others fulfill the task.
2 code implementations • 22 Oct 2020 • Karl Pertsch, Youngwoon Lee, Joseph J. Lim
We validate our approach, SPiRL (Skill-Prior RL), on complex navigation and robotic manipulation tasks and show that learned skill priors are essential for effective skill transfer from rich datasets.
no code implementations • 22 Oct 2020 • Jun Yamada, Youngwoon Lee, Gautam Salhotra, Karl Pertsch, Max Pflueger, Gaurav S. Sukhatme, Joseph J. Lim, Peter Englert
In contrast, motion planners use explicit models of the agent and environment to plan collision-free paths to faraway goals, but suffer from inaccurate models in tasks that require contacts with the environment.
1 code implementation • ICLR 2020 • Youngwoon Lee, Jingyun Yang, Joseph J. Lim
When mastering a complex manipulation task, humans often decompose the task into sub-skills of their body parts, practice the sub-skills independently, and then execute the sub-skills together.
no code implementations • 16 Dec 2019 • Youngwoon Lee, Edward S. Hu, Zhengyu Yang, Joseph J. Lim
Learning from demonstrations is a useful way to transfer a skill from one agent to another.
1 code implementation • 17 Nov 2019 • Youngwoon Lee, Edward S. Hu, Zhengyu Yang, Alex Yin, Joseph J. Lim
The IKEA Furniture Assembly Environment is one of the first benchmarks for testing and accelerating the automation of complex manipulation tasks.