Search Results for author: Laura Smith

Found 14 papers, 7 with code

Adapt On-the-Go: Behavior Modulation for Single-Life Robot Deployment

no code implementations • 2 Nov 2023 • Annie S. Chen, Govind Chada, Laura Smith, Archit Sharma, Zipeng Fu, Sergey Levine, Chelsea Finn

We provide theoretical analysis of our selection mechanism and demonstrate that ROAM enables a robot to adapt rapidly to changes in dynamics both in simulation and on a real Go1 quadruped, even successfully moving forward with roller skates on its feet.
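
The excerpt mentions a selection mechanism without describing it. As a rough, hedged illustration only, value-based selection among pretrained behaviors could look like the sketch below; the behavior names and placeholder value functions are assumptions for illustration, not ROAM's actual implementation.

```python
import numpy as np

# Hypothetical per-behavior value functions. In a real system these would
# be learned critics, one per pretrained behavior; here they are
# placeholder estimates for illustration only.
value_fns = {
    "walk":    lambda s: -float(np.linalg.norm(s)),
    "recover": lambda s: -2.0 * float(np.linalg.norm(s)) + 1.0,
}

def select_behavior(state):
    """Return the behavior whose value estimate is highest at `state`.
    A generic sketch of value-based behavior selection, not ROAM's rule."""
    return max(value_fns, key=lambda name: value_fns[name](state))

print(select_behavior(np.array([0.2, -0.1])))  # picks the higher-value behavior
```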

Grow Your Limits: Continuous Improvement with Real-World RL for Robotic Locomotion

no code implementations • 26 Oct 2023 • Laura Smith, YunHao Cao, Sergey Levine

Deep reinforcement learning (RL) can enable robots to autonomously acquire complex behaviors, such as legged locomotion.

Efficient Exploration • Reinforcement Learning (RL)

Learning and Adapting Agile Locomotion Skills by Transferring Experience

no code implementations • 19 Apr 2023 • Laura Smith, J. Chase Kew, Tianyu Li, Linda Luu, Xue Bin Peng, Sehoon Ha, Jie Tan, Sergey Levine

Legged robots have enormous potential in their range of capabilities, from navigating unstructured terrains to high-speed running.

Reinforcement Learning (RL)

A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free Reinforcement Learning

1 code implementation • 16 Aug 2022 • Laura Smith, Ilya Kostrikov, Sergey Levine

Deep reinforcement learning is a promising approach for learning policies in uncontrolled environments, as it does not require domain knowledge.

reinforcement-learning • Reinforcement Learning (RL)

B-Pref: Benchmarking Preference-Based Reinforcement Learning

1 code implementation • 4 Nov 2021 • Kimin Lee, Laura Smith, Anca Dragan, Pieter Abbeel

However, it is difficult to quantify the progress in preference-based RL due to the lack of a commonly adopted benchmark.

Benchmarking • reinforcement-learning • +1

Offline Meta-Reinforcement Learning with Online Self-Supervision

1 code implementation • 8 Jul 2021 • Vitchyr H. Pong, Ashvin Nair, Laura Smith, Catherine Huang, Sergey Levine

If we can meta-train on offline data, then we can reuse the same static dataset, labeled once with rewards for different tasks, to meta-train policies that adapt to a variety of new tasks at meta-test time.
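
A minimal sketch of that relabeling idea, assuming a simple transition format and two toy reward functions (both are illustrative, not the paper's code): the same static dataset is stamped once per task with that task's reward, producing per-task datasets for meta-training.

```python
import numpy as np

# One static dataset of (state, action, next_state) transitions.
dataset = [(np.array([0.0]), np.array([1.0]), np.array([0.5])),
           (np.array([0.5]), np.array([-1.0]), np.array([0.1]))]

# Hypothetical per-task reward functions of (s, a, s').
reward_fns = {
    "reach_right": lambda s, a, s2: float(s2[0]),
    "stay_still":  lambda s, a, s2: -abs(float(s2[0] - s[0])),
}

# Relabel the same transitions once per task.
relabeled = {
    task: [(s, a, r(s, a, s2), s2) for (s, a, s2) in dataset]
    for task, r in reward_fns.items()
}

for task, transitions in relabeled.items():
    print(task, [round(t[2], 2) for t in transitions])
```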

Meta Reinforcement Learning • Offline RL • +2

PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training

2 code implementations • 9 Jun 2021 • Kimin Lee, Laura Smith, Pieter Abbeel

We also show that our method is able to utilize real-time human feedback to effectively prevent reward exploitation and learn new behaviors that are difficult to specify with standard reward functions.
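
Methods in this family typically fit a reward model to pairwise preferences over behavior segments with a Bradley-Terry style objective; the sketch below shows that generic loss only (network shape, segment format, and labels are assumptions, not PEBBLE's implementation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A small reward model over 4-dimensional observations (illustrative size).
reward_model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))

def preference_loss(seg_a, seg_b, prefer_a):
    """Bradley-Terry style loss: the preferred segment should have the
    higher summed predicted reward. seg_*: (T, obs_dim) tensors;
    prefer_a: True if the teacher preferred seg_a."""
    returns = torch.stack([reward_model(seg_a).sum(),
                           reward_model(seg_b).sum()]).unsqueeze(0)
    target = torch.tensor([0 if prefer_a else 1])
    return F.cross_entropy(returns, target)

seg_a, seg_b = torch.randn(10, 4), torch.randn(10, 4)
loss = preference_loss(seg_a, seg_b, prefer_a=True)
loss.backward()  # gradients train the reward model from preferences
```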

reinforcement-learning • Reinforcement Learning (RL) • +1

AVID: Learning Multi-Stage Tasks via Pixel-Level Translation of Human Videos

no code implementations • 10 Dec 2019 • Laura Smith, Nikita Dhawan, Marvin Zhang, Pieter Abbeel, Sergey Levine

In this paper, we study how these challenges can be alleviated with an automated robotic learning framework, in which multi-stage tasks are defined simply by providing videos of a human demonstrator and then learned autonomously by the robot from raw image observations.

Reinforcement Learning (RL) • Translation

Identifying Locus of Control in Social Media Language

no code implementations • EMNLP 2018 • Masoud Rouhizadeh, Kokil Jaidka, Laura Smith, H. Andrew Schwartz, Anneke Buffone, Lyle Ungar

Individuals express their locus of control, or "control", in their language when they identify whether or not they are in control of their circumstances.

SOLAR: Deep Structured Representations for Model-Based Reinforcement Learning

1 code implementation • ICLR 2019 • Marvin Zhang, Sharad Vikram, Laura Smith, Pieter Abbeel, Matthew J. Johnson, Sergey Levine

Model-based reinforcement learning (RL) has proven to be a data efficient approach for learning control tasks but is difficult to utilize in domains with complex observations such as images.

Model-based Reinforcement Learning • reinforcement-learning • +1
