no code implementations • 25 Jan 2024 • Haoyu Xiong, Russell Mendonca, Kenneth Shaw, Deepak Pathak
We also develop a low-cost mobile manipulation hardware platform capable of safe and autonomous online adaptation in unstructured environments, with a cost of around 20,000 USD.
no code implementations • 5 Sep 2023 • Kevin Gmelin, Shikhar Bahl, Russell Mendonca, Deepak Pathak
Agents that are aware of the separation between themselves and their environments can leverage this understanding to form effective representations of visual input.
no code implementations • 21 Aug 2023 • Russell Mendonca, Shikhar Bahl, Deepak Pathak
We propose an approach for robots to efficiently learn manipulation skills using only a handful of real-world interaction trajectories from many different settings.
no code implementations • CVPR 2023 • Shikhar Bahl, Russell Mendonca, Lili Chen, Unnat Jain, Deepak Pathak
Utilizing internet videos of human behavior, we train a visual affordance model that estimates where and how in the scene a human is likely to interact.
no code implementations • 13 Feb 2023 • Russell Mendonca, Shikhar Bahl, Deepak Pathak
Robotic agents that operate autonomously in the real world need to continuously explore their environment and learn from the data collected, with minimal human supervision.
2 code implementations • NeurIPS 2021 • Russell Mendonca, Oleh Rybkin, Kostas Daniilidis, Danijar Hafner, Deepak Pathak
How can artificial agents learn to solve many diverse tasks in complex visual environments in the absence of any supervision?
no code implementations • ICML Workshop URL 2021 • Russell Mendonca, Oleh Rybkin, Kostas Daniilidis, Danijar Hafner, Deepak Pathak
How can an artificial agent learn to solve a wide range of tasks in a complex visual environment in the absence of external supervision?
no code implementations • 12 Jun 2020 • Russell Mendonca, Xinyang Geng, Chelsea Finn, Sergey Levine
Our method is based on a simple insight: we recognize that dynamics models can be adapted efficiently and consistently with off-policy data, more easily than policies and value functions.
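The insight is that fitting a dynamics model is a supervised regression problem over (state, action, next-state) tuples, which are valid training targets no matter which policy collected them; policies and value functions, by contrast, are tied to the data-collecting distribution. A minimal sketch of this idea (all names and the toy linear dynamics here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "replay buffer" of off-policy transitions (s, a, s').
# Toy ground-truth linear dynamics s' = A s + B a, unknown to the learner.
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
B_true = np.array([[0.5],
                   [1.0]])
S = rng.normal(size=(500, 2))          # states visited by *any* behavior policy
U = rng.normal(size=(500, 1))          # actions from *any* behavior policy
S_next = S @ A_true.T + U @ B_true.T   # observed next states

# Supervised regression: each (s, a) -> s' pair is a valid target
# regardless of the policy that produced it, so off-policy data can
# be used directly and consistently.
X = np.hstack([S, U])
W, *_ = np.linalg.lstsq(X, S_next, rcond=None)
A_hat, B_hat = W[:2].T, W[2:].T        # recovered dynamics matrices

print(np.allclose(A_hat, A_true, atol=1e-6))  # True
print(np.allclose(B_hat, B_true, atol=1e-6))  # True
```

Learning a policy from the same buffer would require importance correction or off-policy machinery; the dynamics fit needs neither, which is what makes model adaptation efficient here.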
no code implementations • 25 Sep 2019 • Russell Mendonca, Xinyang Geng, Chelsea Finn, Sergey Levine
Reinforcement learning algorithms can acquire policies for complex tasks automatically; however, the number of samples required to learn a diverse set of skills can be prohibitively large.
no code implementations • ICLR 2019 • Rosen Kralev, Russell Mendonca, Alvin Zhang, Tianhe Yu, Abhishek Gupta, Pieter Abbeel, Sergey Levine, Chelsea Finn
Meta-reinforcement learning aims to learn fast reinforcement learning (RL) procedures that can be applied to new tasks or environments.
no code implementations • NeurIPS 2019 • Russell Mendonca, Abhishek Gupta, Rosen Kralev, Pieter Abbeel, Sergey Levine, Chelsea Finn
Reinforcement learning (RL) algorithms have demonstrated promising results on complex tasks, yet often require impractical numbers of samples since they learn from scratch.
2 code implementations • NeurIPS 2018 • Abhishek Gupta, Russell Mendonca, Yuxuan Liu, Pieter Abbeel, Sergey Levine
Exploration is a fundamental challenge in reinforcement learning (RL).