Search Results for author: Nicklas Hansen

Found 15 papers, 11 papers with code

TD-MPC2: Scalable, Robust World Models for Continuous Control

1 code implementation · 25 Oct 2023 · Nicklas Hansen, Hao Su, Xiaolong Wang

TD-MPC is a model-based reinforcement learning (RL) algorithm that performs local trajectory optimization in the latent space of a learned implicit (decoder-free) world model.

Continuous Control · Model-based Reinforcement Learning · +1
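
Below is a minimal, self-contained sketch of the latent-space trajectory optimization described here: a decoder-free world model (encoder, latent dynamics, reward, and value heads) and a CEM-style planner that scores imagined rollouts. All module shapes, names, and hyperparameters are illustrative placeholders, not the authors' implementation.

```python
# Sketch of TD-MPC-style planning: sample action sequences, roll them out in
# the latent space of a learned model, and refit the sampling distribution.
import torch
import torch.nn as nn

obs_dim, latent_dim, action_dim = 39, 64, 6
horizon, n_samples, n_elites, n_iters = 5, 512, 64, 6

encoder = nn.Linear(obs_dim, latent_dim)                   # obs -> z
dynamics = nn.Linear(latent_dim + action_dim, latent_dim)  # (z, a) -> z'
reward = nn.Linear(latent_dim + action_dim, 1)             # (z, a) -> r
value = nn.Linear(latent_dim + action_dim, 1)              # (z, a) -> Q

@torch.no_grad()
def plan(obs):
    """CEM-style local trajectory optimization, entirely in latent space."""
    z0 = encoder(obs)
    mean = torch.zeros(horizon, action_dim)
    std = torch.ones(horizon, action_dim)
    for _ in range(n_iters):
        actions = (mean + std * torch.randn(n_samples, horizon, action_dim)).clamp(-1, 1)
        z = z0.expand(n_samples, -1)
        returns = torch.zeros(n_samples)
        for t in range(horizon):
            za = torch.cat([z, actions[:, t]], dim=-1)
            returns += reward(za).squeeze(-1)
            if t == horizon - 1:  # bootstrap the tail with the learned value
                returns += value(za).squeeze(-1)
            z = dynamics(za)
        # Refit the sampling distribution to the top-k trajectories.
        elites = actions[returns.topk(n_elites).indices]
        mean, std = elites.mean(0), elites.std(0) + 1e-6
    return mean[0]  # execute only the first action (MPC)

action = plan(torch.randn(obs_dim))
```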

Finetuning Offline World Models in the Real World

no code implementations · 24 Oct 2023 · Yunhai Feng, Nicklas Hansen, Ziyan Xiong, Chandramouli Rajagopalan, Xiaolong Wang

In this work, we seek to get the best of both worlds: we consider the problem of pretraining a world model with offline data collected on a real robot, and then finetuning the model on online data collected by planning with the learned model.

Offline RL · Reinforcement Learning (RL)
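
A toy sketch of the two-phase recipe in the abstract (offline pretraining of a world model, then online finetuning on freshly collected data). The networks, losses, and the stubs standing in for the planner and the real robot are all assumptions for illustration, not the paper's implementation.

```python
import random
import torch
import torch.nn as nn

# Toy one-step dynamics model: (state, action) -> next state.
model = nn.Sequential(nn.Linear(10 + 2, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(model.parameters(), lr=3e-4)

def update(batch):
    s, a, s_next = batch
    loss = nn.functional.mse_loss(model(torch.cat([s, a], dim=-1)), s_next)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Phase 1: pretrain on offline transitions collected on the real robot.
offline = [(torch.randn(32, 10), torch.randn(32, 2), torch.randn(32, 10))
           for _ in range(100)]
for batch in offline:
    update(batch)

# Phase 2: finetune online. In the real setting, actions would come from
# planning with the learned model and next states from the robot; both are
# random stubs here.
buffer = list(offline)  # keep offline data around to stabilize finetuning
for episode in range(10):
    s, a, s_next = torch.randn(32, 10), torch.randn(32, 2), torch.randn(32, 10)
    buffer.append((s, a, s_next))
    update(random.choice(buffer))
```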

GNFactor: Multi-Task Real Robot Learning with Generalizable Neural Feature Fields

1 code implementation · 31 Aug 2023 · Yanjie Ze, Ge Yan, Yueh-Hua Wu, Annabella Macaluso, Yuying Ge, Jianglong Ye, Nicklas Hansen, Li Erran Li, Xiaolong Wang

To incorporate semantics in 3D, the reconstruction module utilizes a vision-language foundation model (e.g., Stable Diffusion) to distill rich semantic information into the deep 3D voxel.

Decision Making
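
A loose sketch of what distilling 2D foundation-model features into a 3D voxel representation can look like: features trilinearly sampled from a learnable voxel grid are regressed onto embeddings produced by the 2D model. The grid size, feature dimension, and sampling scheme are assumptions for illustration, not GNFactor's actual pipeline.

```python
import torch
import torch.nn.functional as F

# Learnable 3D feature grid: (batch, channels, depth, height, width).
voxels = torch.randn(1, 64, 32, 32, 32, requires_grad=True)

def distill_loss(points, target_feats):
    """points: (N, 3) in [-1, 1]; target_feats: (N, 64) from the 2D model."""
    grid = points.view(1, -1, 1, 1, 3)
    sampled = F.grid_sample(voxels, grid, align_corners=True)  # (1, 64, N, 1, 1)
    sampled = sampled.view(64, -1).t()                         # (N, 64)
    return F.mse_loss(sampled, target_feats)

loss = distill_loss(torch.rand(128, 3) * 2 - 1, torch.randn(128, 64))
loss.backward()  # gradients flow into the voxel grid
```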

MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations

1 code implementation · 12 Dec 2022 · Nicklas Hansen, Yixin Lin, Hao Su, Xiaolong Wang, Vikash Kumar, Aravind Rajeswaran

We identify key ingredients for leveraging demonstrations in model learning -- policy pretraining, targeted exploration, and oversampling of demonstration data -- which form the three phases of our model-based RL framework.

Model-based Reinforcement Learning · reinforcement-learning · +1
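
An illustrative sketch of the "oversampling demonstration data" ingredient: each training batch mixes agent replay with demonstrations at a fixed, demo-heavy ratio. The 25% ratio and buffer layout are assumptions for illustration, not MoDem's exact values.

```python
import random

def sample_batch(agent_buffer, demo_buffer, batch_size=256, demo_frac=0.25):
    """Draw a batch in which demonstrations are deliberately over-represented
    relative to their share of all collected data."""
    n_demo = int(batch_size * demo_frac)
    batch = random.choices(demo_buffer, k=n_demo)
    batch += random.choices(agent_buffer, k=batch_size - n_demo)
    random.shuffle(batch)
    return batch

demos = [("demo", i) for i in range(50)]        # a handful of demonstrations
replay = [("agent", i) for i in range(50_000)]  # much larger agent replay
batch = sample_batch(replay, demos)
print(sum(1 for src, _ in batch if src == "demo"))  # 64 of 256 from demos
```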

On the Feasibility of Cross-Task Transfer with Model-Based Reinforcement Learning

1 code implementation · 19 Oct 2022 · Yifan Xu, Nicklas Hansen, ZiRui Wang, Yung-Chieh Chan, Hao Su, Zhuowen Tu

Reinforcement Learning (RL) algorithms can solve challenging control problems directly from image observations, but they often require millions of environment interactions to do so.

Atari Games 100k · Model-based Reinforcement Learning · +2

Visual Reinforcement Learning with Self-Supervised 3D Representations

1 code implementation · 13 Oct 2022 · Yanjie Ze, Nicklas Hansen, Yinbo Chen, Mohit Jain, Xiaolong Wang

A prominent approach to visual Reinforcement Learning (RL) is to learn an internal state representation using self-supervised methods, which has the potential benefit of improved sample-efficiency and generalization through additional learning signal and inductive biases.

reinforcement-learning · Reinforcement Learning (RL) · +2

Graph Inverse Reinforcement Learning from Diverse Videos

no code implementations · 28 Jul 2022 · Sateesh Kumar, Jonathan Zamora, Nicklas Hansen, Rishabh Jangir, Xiaolong Wang

Research on Inverse Reinforcement Learning (IRL) from third-person videos has shown encouraging results on removing the need for manual reward design for robotic tasks.

reinforcement-learning · Reinforcement Learning (RL) · +1

Temporal Difference Learning for Model Predictive Control

2 code implementations · 9 Mar 2022 · Nicklas Hansen, Xiaolong Wang, Hao Su

Data-driven model predictive control has two key advantages over model-free methods: a potential for improved sample efficiency through model learning, and better performance as computational budget for planning increases.

Continuous Control · Model Predictive Control
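
The companion mechanism to the planning sketch under the TD-MPC2 entry above: a value function trained with temporal-difference targets, which lets a short planning horizon be extended with a learned terminal value. A minimal sketch with toy networks and an assumed Polyak-averaged target network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

q, q_target = nn.Linear(8 + 2, 1), nn.Linear(8 + 2, 1)
q_target.load_state_dict(q.state_dict())
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
gamma = 0.99

def td_update(z, a, r, z_next, a_next):
    """One temporal-difference update on latent transitions."""
    with torch.no_grad():
        target = r + gamma * q_target(torch.cat([z_next, a_next], dim=-1))
    loss = F.mse_loss(q(torch.cat([z, a], dim=-1)), target)
    opt.zero_grad(); loss.backward(); opt.step()
    # Slowly track the online network (Polyak averaging).
    for p, tp in zip(q.parameters(), q_target.parameters()):
        tp.data.lerp_(p.data, 0.01)
    return loss.item()

td_update(torch.randn(32, 8), torch.randn(32, 2), torch.randn(32, 1),
          torch.randn(32, 8), torch.randn(32, 2))
```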

Look Closer: Bridging Egocentric and Third-Person Views with Transformers for Robotic Manipulation

no code implementations · 19 Jan 2022 · Rishabh Jangir, Nicklas Hansen, Sambaran Ghosal, Mohit Jain, Xiaolong Wang

We propose a setting for robotic manipulation in which the agent receives visual feedback from both a third-person camera and an egocentric camera mounted on the robot's wrist.

Reinforcement Learning (RL)
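
One plausible way to realize the two-camera setup with transformers (a hypothetical sketch, not necessarily the paper's architecture): tokens from the third-person and egocentric views are concatenated and fused with self-attention before a policy head. Token counts and dimensions are invented for illustration.

```python
import torch
import torch.nn as nn

d = 64
fuse = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
    num_layers=2)
policy = nn.Linear(d, 7)  # e.g., a 7-DoF action

third_person = torch.randn(8, 16, d)  # 16 tokens from the external camera
egocentric = torch.randn(8, 16, d)    # 16 tokens from the wrist camera
tokens = torch.cat([third_person, egocentric], dim=1)
action = policy(fuse(tokens).mean(dim=1))  # cross-view attention, then pool
```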

Learning Vision-Guided Quadrupedal Locomotion End-to-End with Cross-Modal Transformers

1 code implementation · ICLR 2022 · Ruihan Yang, Minghao Zhang, Nicklas Hansen, Huazhe Xu, Xiaolong Wang

Our key insight is that proprioceptive states only offer contact measurements for immediate reaction, whereas an agent equipped with visual sensory observations can learn to proactively maneuver environments with obstacles and uneven terrain by anticipating changes in the environment many steps ahead.

Reinforcement Learning (RL)

Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation

3 code implementations · NeurIPS 2021 · Nicklas Hansen, Hao Su, Xiaolong Wang

Our method greatly improves stability and sample efficiency of ConvNets under augmentation, and achieves generalization results competitive with state-of-the-art methods for image-based RL in environments with unseen visuals.

Data Augmentation · Q-Learning · +1
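
A rough sketch of one way to stabilize Q-learning under augmentation, in the spirit of this abstract: bootstrap targets are computed from clean (unaugmented) frames only, while the Q regression loss is applied to both clean and augmented views. The network, augmentation, and loss weighting are simplified assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

q = nn.Sequential(nn.Flatten(), nn.Linear(3 * 84 * 84, 4))  # toy image Q-net

def random_shift(obs, pad=4):
    """Common pixel augmentation: pad, then randomly crop back to size."""
    padded = F.pad(obs, (pad,) * 4, mode="replicate")
    i, j = torch.randint(0, 2 * pad + 1, (2,)).tolist()
    return padded[..., i:i + 84, j:j + 84]

def augmented_q_loss(obs, action, td_target):
    # td_target is assumed to come from unaugmented next observations.
    q_clean = q(obs).gather(1, action)
    q_aug = q(random_shift(obs)).gather(1, action)
    return 0.5 * (F.mse_loss(q_clean, td_target) + F.mse_loss(q_aug, td_target))

obs = torch.randn(8, 3, 84, 84)
action = torch.randint(0, 4, (8, 1))
td_target = torch.randn(8, 1)
print(augmented_q_loss(obs, action, td_target).item())
```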

Generalization in Reinforcement Learning by Soft Data Augmentation

2 code implementations · 26 Nov 2020 · Nicklas Hansen, Xiaolong Wang

Extensive efforts have been made to improve the generalization ability of Reinforcement Learning (RL) methods via domain randomization and data augmentation.

Data Augmentation · reinforcement-learning · +1
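
A hedged sketch of using augmentation "softly": the augmented view enters only an auxiliary representation-consistency loss that pulls it toward the clean view, rather than the RL objective itself. The encoder, projection, and target computation below are illustrative simplifications of such a scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 84 * 84, 128))
projector = nn.Linear(128, 64)

def consistency_loss(obs, augment):
    with torch.no_grad():                    # clean view provides the target
        target = projector(encoder(obs))
    pred = projector(encoder(augment(obs)))  # augmented view must match it
    return F.mse_loss(F.normalize(pred, dim=-1), F.normalize(target, dim=-1))

noise = lambda x: x + 0.1 * torch.randn_like(x)  # stand-in augmentation
loss = consistency_loss(torch.randn(8, 3, 84, 84), noise)
```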

Self-Supervised Policy Adaptation during Deployment

2 code implementations · ICLR 2021 · Nicklas Hansen, Rishabh Jangir, Yu Sun, Guillem Alenyà, Pieter Abbeel, Alexei A. Efros, Lerrel Pinto, Xiaolong Wang

A natural solution would be to keep training after deployment in the new environment, but this cannot be done if the new environment offers no reward signal.
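
A minimal sketch of the reward-free adaptation idea this abstract motivates: a self-supervised objective (here, inverse dynamics prediction, which needs no reward signal) keeps updating the encoder after deployment. Modules and shapes are toy placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Linear(16, 32)  # shared with the policy
inv_dyn = nn.Linear(64, 4)   # predicts the action from (z_t, z_t+1)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(inv_dyn.parameters()), lr=1e-4)

def adapt_step(obs, action, next_obs):
    """One self-supervised update taken online, with no reward required."""
    z = torch.cat([encoder(obs), encoder(next_obs)], dim=-1)
    loss = F.cross_entropy(inv_dyn(z), action)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

adapt_step(torch.randn(8, 16), torch.randint(0, 4, (8,)), torch.randn(8, 16))
```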

Short Term Blood Glucose Prediction based on Continuous Glucose Monitoring Data

no code implementations · 6 Feb 2020 · Ali Mohebbi, Alexander R. Johansen, Nicklas Hansen, Peter E. Christensen, Jens M. Tarp, Morten L. Jensen, Henrik Bengtsson, Morten Mørup

In this context, we evaluate both population-based and patient-specific RNNs and contrast them with patient-specific ARIMA models and a simple baseline that predicts future observations as the last observed value.

Management · Time Series · +1
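
A toy sketch of the comparison described above: an LSTM forecaster over a window of CGM samples next to the trivial last-observation baseline. Window length, sampling rate, and model size are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn

class GlucoseRNN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, time, 1) past CGM values
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict glucose at the horizon

cgm = torch.randn(8, 36, 1)          # e.g., 3 hours of 5-minute samples
model = GlucoseRNN()
rnn_pred = model(cgm)
baseline_pred = cgm[:, -1]           # last-observation baseline
```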
