Search Results for author: Oleh Rybkin

Found 19 papers, 9 papers with code

METRA: Scalable Unsupervised RL with Metric-Aware Abstraction

1 code implementation • 13 Oct 2023 • Seohong Park, Oleh Rybkin, Sergey Levine

Through our experiments in five locomotion and manipulation environments, we demonstrate that METRA can discover a variety of useful behaviors even in complex, pixel-based environments, making it the first unsupervised RL method to discover diverse locomotion behaviors in pixel-based Quadruped and Humanoid.

Tasks: Reinforcement Learning (RL), Unsupervised Pre-training, +1
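
The metric-aware abstraction in METRA trains a representation phi to move as far as possible along a sampled skill vector z while staying 1-Lipschitz with respect to temporal distance, so that latent distances reflect how many environment steps separate states. A minimal sketch of that constrained objective, with the dual variable simplified to a fixed penalty weight; the function name, shapes, and coefficient are illustrative, not the reference implementation:

```python
import torch

def metra_repr_loss(phi, s, s_next, z, lam=30.0):
    """METRA-style representation objective (illustrative sketch).

    Maximize the displacement of phi along the skill vector z, subject to
    adjacent states staying within unit distance in latent space (the
    temporal-distance constraint), enforced here as a soft penalty.
    """
    d = phi(s_next) - phi(s)                 # latent displacement
    align = (d * z).sum(dim=-1)              # (phi(s') - phi(s))^T z
    violation = torch.relu((d ** 2).sum(dim=-1) - 1.0)
    return (-align + lam * violation).mean()
```

The full method updates a Lagrange multiplier online rather than using a fixed weight; the constant `lam` only keeps the sketch short.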

Planning Goals for Exploration

1 code implementation • 23 Mar 2023 • Edward S. Hu, Richard Chang, Oleh Rybkin, Dinesh Jayaraman

We address this question within the goal-conditioned reinforcement learning paradigm, by identifying how the agent should set its goals at training time to maximize exploration.
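
One way to read "setting goals to maximize exploration" is to score candidate goal commands by the exploration value of the imagined trajectory a goal-conditioned policy would produce while chasing them inside a learned world model. A minimal sketch under that reading; `world_model`, `policy`, `explore_reward`, and the candidate set are all assumed interfaces, not the paper's API:

```python
def plan_exploration_goal(world_model, policy, explore_reward,
                          candidate_goals, z0, horizon):
    """Pick the goal whose imagined pursuit yields the most exploration.

    Illustrative sketch: roll the goal-conditioned policy forward in the
    learned model for each candidate goal and keep the goal with the
    highest accumulated exploration reward (a float per latent state).
    """
    best_goal, best_value = None, float("-inf")
    for g in candidate_goals:
        z, value = z0, 0.0
        for _ in range(horizon):
            a = policy(z, g)              # goal-conditioned action
            z = world_model(z, a)         # imagined next latent state
            value += explore_reward(z)    # novelty of imagined state
        if value > best_value:
            best_goal, best_value = g, value
    return best_goal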

Discovering and Achieving Goals via World Models

2 code implementations • NeurIPS 2021 • Russell Mendonca, Oleh Rybkin, Kostas Daniilidis, Danijar Hafner, Deepak Pathak

How can artificial agents learn to solve many diverse tasks in complex visual environments in the absence of any supervision?

Transferable Visual Control Policies Through Robot-Awareness

no code implementations • ICLR 2022 • Edward S. Hu, Kun Huang, Oleh Rybkin, Dinesh Jayaraman

Training visual control policies from scratch on a new robot typically requires generating large amounts of robot-specific data.

Know Thyself: Transferable Visual Control Policies Through Robot-Awareness

1 code implementation • 19 Jul 2021 • Edward S. Hu, Kun Huang, Oleh Rybkin, Dinesh Jayaraman

Training visual control policies from scratch on a new robot typically requires generating large amounts of robot-specific data.

Tasks: Model-based Reinforcement Learning, Transfer Learning, +1

Model-Based Reinforcement Learning via Latent-Space Collocation

1 code implementation • 24 Jun 2021 • Oleh Rybkin, Chuning Zhu, Anusha Nagabandi, Kostas Daniilidis, Igor Mordatch, Sergey Levine

The resulting latent collocation method (LatCo) optimizes trajectories of latent states, which improves over previously proposed shooting methods for visual model-based RL on tasks with sparse rewards and long-term goals.

Tasks: Model-based Reinforcement Learning, Reinforcement Learning, +1
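
Shooting methods optimize an action sequence and obtain states by rolling the model forward, which makes gradients weak under sparse rewards; collocation instead optimizes the latent states directly and only softly ties them to the dynamics. A minimal sketch of that structure, assuming learned `dynamics(z, a)` and `reward(z)` heads that operate on batches; the fixed penalty weight stands in for the paper's constrained optimizer:

```python
import torch

def latent_collocation(dynamics, reward, z0, horizon,
                       latent_dim, action_dim,
                       steps=500, lr=0.05, penalty=1.0):
    """Jointly optimize latent states and actions (collocation sketch).

    Dynamics violations ||z_{t+1} - dynamics(z_t, a_t)||^2 are penalized
    rather than enforced by rollout, so the optimizer can first reach the
    reward and then make the trajectory dynamically feasible.
    """
    z = torch.randn(horizon, latent_dim, requires_grad=True)
    a = torch.zeros(horizon, action_dim, requires_grad=True)
    opt = torch.optim.Adam([z, a], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        prev = torch.cat([z0.unsqueeze(0), z[:-1]])     # z_0 .. z_{H-1}
        dyn_gap = ((z - dynamics(prev, a)) ** 2).sum()  # constraint violation
        loss = -reward(z).sum() + penalty * dyn_gap
        loss.backward()
        opt.step()
    return z.detach(), a.detach()
```

The paper uses a more careful constrained-optimization scheme with an adaptively scheduled constraint weight; Adam with a fixed penalty only illustrates the decision variables.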

Discovering and Achieving Goals with World Models

no code implementations • ICML Workshop URL 2021 • Russell Mendonca, Oleh Rybkin, Kostas Daniilidis, Danijar Hafner, Deepak Pathak

How can an artificial agent learn to solve a wide range of tasks in a complex visual environment in the absence of external supervision?

Simple and Effective VAE Training with Calibrated Decoders

1 code implementation • 23 Jun 2020 • Oleh Rybkin, Kostas Daniilidis, Sergey Levine

We perform the first comprehensive comparative analysis of calibrated decoders and provide recommendations for simple and effective VAE training.
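
A calibrated decoder learns the variance of its output distribution rather than fixing it, which automatically balances the reconstruction term against the KL term (the role usually played by a hand-tuned beta coefficient). A minimal sketch of one variant from this analysis, a Gaussian decoder with a single shared learned log-variance; shapes assume image batches and the class name is illustrative:

```python
import torch
import torch.nn as nn

class CalibratedGaussianLoss(nn.Module):
    """Gaussian reconstruction NLL with a single learned log-sigma.

    Learning sigma calibrates the decoder: the reconstruction term is
    automatically weighted against the KL term, removing the need to
    hand-tune a beta coefficient. Illustrative sketch, not the authors'
    reference implementation.
    """

    def __init__(self):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.zeros(()))  # shared scalar sigma

    def forward(self, x_recon, x):
        # Per-pixel Gaussian negative log-likelihood (constant term dropped).
        inv_var = torch.exp(-2.0 * self.log_sigma)
        nll = 0.5 * inv_var * (x_recon - x) ** 2 + self.log_sigma
        return nll.sum(dim=[1, 2, 3]).mean()  # sum over pixels, mean over batch
```

The full objective adds the standard KL(q(z|x) || p(z)) term; the paper also discusses computing the shared variance analytically from the reconstruction error instead of learning it by gradient descent.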

Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors

1 code implementation • NeurIPS 2020 • Karl Pertsch, Oleh Rybkin, Frederik Ebert, Chelsea Finn, Dinesh Jayaraman, Sergey Levine

In this work we propose a framework for visual prediction and planning that is able to overcome both of these limitations.
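
The hierarchical predictor in this framework generates long sequences by recursive infilling conditioned on a goal: predict a frame between the current start and the goal, then recurse into both halves. A minimal sketch of that expansion order, assuming a learned `midpoint(a, b)` network (hypothetical name); the paper's version operates on latent states with learned stochastic models:

```python
def hierarchical_predict(midpoint, start, goal, depth):
    """Recursively infill frames between start and goal (tree expansion).

    Each call predicts the middle frame, then fills in both halves,
    yielding 2**depth - 1 intermediate frames in temporal order.
    """
    if depth == 0:
        return []
    mid = midpoint(start, goal)  # predict the frame between the endpoints
    left = hierarchical_predict(midpoint, start, mid, depth - 1)
    right = hierarchical_predict(midpoint, mid, goal, depth - 1)
    return left + [mid] + right
```

Because each level halves the remaining horizon, errors do not accumulate frame by frame as they do in purely sequential prediction.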

Planning to Explore via Self-Supervised World Models

4 code implementations • 12 May 2020 • Ramanan Sekar, Oleh Rybkin, Kostas Daniilidis, Pieter Abbeel, Danijar Hafner, Deepak Pathak

Reinforcement learning allows solving complex tasks; however, learning tends to be task-specific and sample efficiency remains a challenge.

Tasks: Reinforcement Learning (RL)
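
Plan2Explore's self-supervised signal comes from model uncertainty: an ensemble of one-step predictors is trained on the same latent transitions, and their disagreement serves as an intrinsic reward the agent plans to maximize. A minimal sketch of that disagreement reward; the ensemble size, dimensions, and layer shapes are illustrative, not the reference implementation:

```python
import torch
import torch.nn as nn

class DisagreementReward(nn.Module):
    """Intrinsic reward from the variance of an ensemble of one-step models.

    Each head predicts next latent features from (latent, action); states
    where the heads disagree are novel and receive high reward.
    """

    def __init__(self, latent_dim=32, action_dim=6, hidden=256, n_heads=5):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Linear(latent_dim + action_dim, hidden), nn.ELU(),
                nn.Linear(hidden, latent_dim),
            )
            for _ in range(n_heads)
        )

    def forward(self, latent, action):
        x = torch.cat([latent, action], dim=-1)
        preds = torch.stack([head(x) for head in self.heads])  # (H, B, D)
        return preds.var(dim=0).mean(dim=-1)  # (B,) disagreement reward
```

Because the reward is computed from the model alone, it can be evaluated on imagined trajectories, letting the agent seek out uncertainty before ever visiting it.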

Learning Predictive Models From Observation and Interaction

no code implementations • ECCV 2020 • Karl Schmeckpeper, Annie Xie, Oleh Rybkin, Stephen Tian, Kostas Daniilidis, Sergey Levine, Chelsea Finn

Learning predictive models from interaction with the world allows an agent, such as a robot, to learn how the world works and then use this learned model to plan coordinated sequences of actions that bring about desired outcomes.

Goal-Conditioned Video Prediction

no code implementations • 25 Sep 2019 • Oleh Rybkin, Karl Pertsch, Frederik Ebert, Dinesh Jayaraman, Chelsea Finn, Sergey Levine

Prior work on video generation largely focuses on prediction models that only observe frames from the beginning of the video.

Tasks: Imitation Learning, Video Generation, +1

Keyframing the Future: Discovering Temporal Hierarchy with Keyframe-Inpainter Prediction

no code implementations • 25 Sep 2019 • Karl Pertsch, Oleh Rybkin, Jingyun Yang, Konstantinos G. Derpanis, Kostas Daniilidis, Joseph J. Lim, Andrew Jaegle

Flexible and efficient reasoning about temporal sequences requires abstract representations that compactly capture the important information in the sequence.

Tasks: Temporal Sequences

Learning what you can do before doing anything

no code implementations • ICLR 2019 • Oleh Rybkin, Karl Pertsch, Konstantinos G. Derpanis, Kostas Daniilidis, Andrew Jaegle

We introduce a loss term that encourages the network to capture the composability of visual sequences and show that it leads to representations that disentangle the structure of actions.

Tasks: Video Prediction
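
One way to picture the composability term: the latent code for a two-step transition should be recoverable by composing the codes of its two one-step parts. A minimal sketch under that reading; `encode_action` and `compose` are hypothetical stand-ins for the paper's learned networks:

```python
def composability_loss(encode_action, compose, o1, o2, o3):
    """Match composed one-step action codes to the direct two-step code.

    Illustrative sketch: encode_action(o_a, o_b) embeds the transition
    between two observations, and compose merges consecutive embeddings.
    """
    z12 = encode_action(o1, o2)   # code for o1 -> o2
    z23 = encode_action(o2, o3)   # code for o2 -> o3
    z13 = encode_action(o1, o3)   # code for the combined transition
    return ((compose(z12, z23) - z13) ** 2).mean()
```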

Predicting the Future with Transformational States

no code implementations • 26 Mar 2018 • Andrew Jaegle, Oleh Rybkin, Konstantinos G. Derpanis, Kostas Daniilidis

We couple this latent state with a recurrent neural network (RNN) core that predicts future frames by transforming past states into future states, applying the accumulated state transformation with a learned operator.
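
A minimal sketch of that predict-by-transforming loop: a recurrent core carries the accumulated transformation, and a learned operator (a bilinear layer here, as an assumption) applies it to the current state to produce the next one. Module names and sizes are illustrative:

```python
import torch
import torch.nn as nn

class TransformationPredictor(nn.Module):
    """Predict future states by applying an RNN-accumulated transformation.

    Illustrative sketch: the GRU cell carries the accumulated transformation
    code, and a bilinear layer acts as the learned operator that applies the
    transformation to the current latent state.
    """

    def __init__(self, state_dim=128, trans_dim=64):
        super().__init__()
        self.rnn = nn.GRUCell(state_dim, trans_dim)
        self.apply_op = nn.Bilinear(state_dim, trans_dim, state_dim)

    def forward(self, state, hidden, steps):
        states = []
        for _ in range(steps):
            hidden = self.rnn(state, hidden)      # accumulate transformation
            state = self.apply_op(state, hidden)  # apply with learned operator
            states.append(state)
        return torch.stack(states), hidden
```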
