Search Results for author: Matthew Walter

Found 6 papers, 1 paper with code

Subwords as Skills: Tokenization for Sparse-Reward Reinforcement Learning

no code implementations • 8 Sep 2023 • David Yunis, Justin Jung, Falcon Dai, Matthew Walter

Exploration in sparse-reward reinforcement learning is difficult because achieving any reward requires long, coordinated sequences of actions.

reinforcement-learning
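
The title suggests mining temporally extended "skills" by tokenizing demonstrated action sequences, in the spirit of subword tokenization for text. As a hedged illustration only, a byte-pair-encoding-style merge over discretized demonstration actions might look like the sketch below (all names are hypothetical, not the authors' code):

    from collections import Counter

    def bpe_skills(action_seqs, num_merges):
        """Hypothetical sketch: repeatedly merge the most frequent adjacent
        action pair into a composite "skill" token, as byte-pair encoding
        does for text. An assumption about the approach, not the paper's code."""
        seqs = [list(s) for s in action_seqs]
        skills = []
        for _ in range(num_merges):
            pairs = Counter()
            for s in seqs:
                pairs.update(zip(s, s[1:]))
            if not pairs:
                break
            (a, b), _ = pairs.most_common(1)[0]
            merged = (a, b)  # the new composite skill token
            skills.append(merged)
            rewritten = []
            for s in seqs:
                out, i = [], 0
                while i < len(s):
                    if i + 1 < len(s) and (s[i], s[i + 1]) == (a, b):
                        out.append(merged)
                        i += 2
                    else:
                        out.append(s[i])
                        i += 1
                rewritten.append(out)
            seqs = rewritten
        return skills, seqs

    # Discretized primitive actions from demonstrations:
    demos = [[0, 1, 0, 1, 2, 3], [0, 1, 2, 3, 0, 1]]
    skills, retokenized = bpe_skills(demos, num_merges=2)

Each merged skill can then be exposed to the agent as a single macro-action, shortening the horizon over which exploration must coordinate.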

To the Noise and Back: Diffusion for Shared Autonomy

no code implementations • 23 Feb 2023 • Takuma Yoneda, Luzhe Sun, Ge Yang, Bradly Stadie, Matthew Walter

Traditional approaches to shared autonomy rely on knowledge of the environment dynamics, a discrete space of user goals that is known a priori, or knowledge of the user's policy -- assumptions that are unrealistic in many domains.

Continuous Control • Reinforcement Learning (RL)
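
The title hints at the mechanism: partially diffuse the user's action toward noise, then run a learned reverse process back, so the output stays close to the user's intent while conforming to expert-like behavior. A minimal DDPM-style sketch under those assumptions (the denoiser interface and noise schedule are placeholders, not the paper's implementation):

    import torch

    def to_the_noise_and_back(user_action, denoiser, alphas_cumprod, k):
        """Hedged sketch: diffuse the user's action k steps forward, then
        apply the learned reverse (denoising) process. denoiser(x, t) is
        assumed to predict the added noise; the paper may differ."""
        a_bar_k = alphas_cumprod[k]
        # Forward: push the user action partway toward noise.
        x = (torch.sqrt(a_bar_k) * user_action
             + torch.sqrt(1 - a_bar_k) * torch.randn_like(user_action))
        # Reverse: denoise back toward the expert action distribution.
        for t in range(k, 0, -1):
            a_t = alphas_cumprod[t] / alphas_cumprod[t - 1]  # per-step alpha
            eps = denoiser(x, t)
            x = (x - (1 - a_t) / torch.sqrt(1 - alphas_cumprod[t]) * eps) / torch.sqrt(a_t)
            if t > 1:
                x = x + torch.sqrt(1 - a_t) * torch.randn_like(x)
        return x

In a sketch like this, k trades off fidelity to the user (small k) against conformance to the expert distribution (large k).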

Depth Field Networks for Generalizable Multi-view Scene Representation

no code implementations • 28 Jul 2022 • Vitor Guizilini, Igor Vasiljevic, Jiading Fang, Rares Ambrus, Greg Shakhnarovich, Matthew Walter, Adrien Gaidon

Modern 3D computer vision leverages learning to boost geometric reasoning, mapping image data to classical structures such as cost volumes or epipolar constraints to improve matching.

Data Augmentation • Depth Estimation • +2
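
For context on the "classical structures" this abstract contrasts against, a plane-sweep cost volume can be sketched as follows; this illustrates the conventional baseline, not the paper's architecture, and every name here is hypothetical:

    import torch
    import torch.nn.functional as F

    def plane_sweep_cost_volume(feat_ref, feat_src, K, K_inv, R, t, depths):
        """Illustrative sketch of a classical cost volume: warp source-view
        features onto the reference view at hypothesized depths and stack
        the per-pixel matching costs."""
        B, C, H, W = feat_ref.shape
        ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
        pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).float().reshape(3, -1)
        costs = []
        for d in depths:
            # Back-project reference pixels to depth d; transform to the source view.
            pts = R @ (K_inv @ pix * d) + t.reshape(3, 1)
            uv = K @ pts
            uv = uv[:2] / uv[2:].clamp(min=1e-6)
            # Normalize pixel coordinates to [-1, 1] for grid_sample.
            grid = torch.stack([uv[0] / (W - 1) * 2 - 1,
                                uv[1] / (H - 1) * 2 - 1], dim=-1)
            grid = grid.reshape(1, H, W, 2).expand(B, H, W, 2)
            warped = F.grid_sample(feat_src, grid, align_corners=True)
            costs.append((feat_ref * warped).mean(1))  # correlation cost
        return torch.stack(costs, 1)  # B x D x H x W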

Grasp and Motion Planning for Dexterous Manipulation for the Real Robot Challenge

2 code implementations • 8 Jan 2021 • Takuma Yoneda, Charles Schaff, Takahiro Maeda, Matthew Walter

This report describes our winning submission to the Real Robot Challenge (https://real-robot-challenge.com/).

Motion Planning

Maximum Expected Hitting Cost of a Markov Decision Process and Informativeness of Rewards

no code implementations • NeurIPS 2019 • Falcon Dai, Matthew Walter

By analyzing the change in the maximum expected hitting cost, this work presents a formal understanding of the effect of potential-based reward shaping on regret (and sample complexity) in the undiscounted average reward setting.

Informativeness
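
For reference, potential-based reward shaping augments the reward with the difference of a state potential \Phi. In the undiscounted setting studied here the discount factor is 1, so the shaped reward takes the standard textbook form (not quoted from the paper):

    \tilde{r}(s, a, s') = r(s, a, s') + \gamma \, \Phi(s') - \Phi(s), \qquad \gamma = 1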
