Search Results for author: Bryan Chan

Found 7 papers, 3 papers with code

ACPO: AI-Enabled Compiler-Driven Program Optimization

no code implementations • 15 Dec 2023 • Amir H. Ashouri, Muhammad Asif Manzoor, Duc Minh Vu, Raymond Zhang, Ziwen Wang, Angel Zhang, Bryan Chan, Tomasz S. Czajkowski, Yaoqing Gao

The key to performance optimization of a program is to decide correctly when a certain transformation should be applied by a compiler.
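ACPO's actual interface is not reproduced here; the following is a minimal, hypothetical sketch of the idea the snippet describes, assuming a trained binary classifier with a scikit-learn-style predict_proba interface that predicts, from simple hand-picked loop features, whether a transformation such as loop unrolling is likely to pay off. The feature names and the 0.5 threshold are illustrative assumptions, not part of ACPO.

```python
# Hypothetical sketch: a learned classifier gating a compiler transformation.
# Feature names, the 0.5 threshold, and the classifier interface are
# illustrative assumptions, not ACPO's actual design.
from dataclasses import dataclass

@dataclass
class LoopFeatures:
    trip_count: int   # estimated number of iterations
    body_size: int    # instructions in the loop body
    has_calls: bool   # calls tend to inhibit unrolling benefits

def should_unroll(features: LoopFeatures, model) -> bool:
    """Return True if the model predicts unrolling improves performance.

    `model` is any trained binary classifier exposing predict_proba,
    e.g. a scikit-learn estimator fit on profiled compilations.
    """
    x = [[features.trip_count, features.body_size, int(features.has_calls)]]
    return model.predict_proba(x)[0][1] > 0.5
```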

A Statistical Guarantee for Representation Transfer in Multitask Imitation Learning

no code implementations • 2 Nov 2023 • Bryan Chan, Karime Pereida, James Bergstra

Transferring representations for multitask imitation learning has the potential to improve sample efficiency when learning new tasks, compared to learning from scratch.

Imitation Learning

Learning from Guided Play: Improving Exploration for Adversarial Imitation Learning with Simple Auxiliary Tasks

1 code implementation • 30 Dec 2022 • Trevor Ablett, Bryan Chan, Jonathan Kelly

In this work, we show that the standard, naive approach to exploration can manifest as a suboptimal local maximum if a policy learned with AIL sufficiently matches the expert distribution without fully learning the desired task; a minimal sketch of this effect follows this entry.

Imitation Learning
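As a rough illustration of the failure mode described in the abstract above, here is a minimal sketch of a common discriminator-based AIL reward, r(s, a) = -log(1 - D(s, a)). This is not the paper's implementation; the network sizes and the reward form are assumptions. The point it illustrates: once the policy matches the expert distribution on the states it visits, D settles near 0.5 there, the reward becomes nearly constant, and the learner gets little signal to explore toward the unfinished parts of the task.

```python
# Illustrative sketch of a discriminator-based AIL reward (not the paper's
# code). D(s, a) estimates the probability that a state-action pair came
# from the expert; the reward -log(1 - D) pushes the policy toward
# expert-like pairs.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(torch.cat([obs, act], dim=-1)))

def ail_reward(disc: Discriminator, obs, act, eps: float = 1e-8):
    """Reward used to train the imitating policy.

    When the policy's visited state-action distribution matches the expert's,
    D -> 0.5 on that data and the reward flattens to roughly log(2), which is
    the kind of suboptimal local maximum the abstract refers to.
    """
    d = disc(obs, act)
    return -torch.log(1.0 - d + eps)
```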

Learning from Guided Play: A Scheduled Hierarchical Approach for Improving Exploration in Adversarial Imitation Learning

1 code implementation • 16 Dec 2021 • Trevor Ablett, Bryan Chan, Jonathan Kelly

We present Learning from Guided Play (LfGP), a framework in which we leverage expert demonstrations of multiple auxiliary tasks in addition to a main task; a minimal sketch of the scheduling idea follows this entry.

Imitation Learning, Transfer Learning
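The released LfGP code is not reproduced here; below is a minimal sketch, under assumed names, of the kind of scheduling the description implies: a simple scheduler periodically hands control of the environment to either the main-task policy or one of the auxiliary-task policies, so that exploration is driven by the auxiliary behaviours as well as the main task.

```python
# Illustrative sketch of scheduled auxiliary-task data collection
# (assumed structure and names, not the released LfGP implementation).
import random

def collect_episode(env, policies, main_task, aux_tasks,
                    segment_len: int = 50, horizon: int = 500):
    """Collect one episode, switching the active task every `segment_len` steps.

    `env` is assumed to be a gym-style environment; `policies` maps each
    task name to a callable obs -> action.
    """
    tasks = [main_task] + list(aux_tasks)
    obs = env.reset()
    trajectory = []
    task = main_task
    for t in range(horizon):
        if t % segment_len == 0:
            task = random.choice(tasks)  # uniform scheduler for simplicity
        action = policies[task](obs)
        obs_next, reward, done, info = env.step(action)
        trajectory.append((task, obs, action, obs_next))
        obs = obs_next
        if done:
            break
    return trajectory
```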

Heteroscedastic Uncertainty for Robust Generative Latent Dynamics

1 code implementation • 18 Aug 2020 • Oliver Limoyo, Bryan Chan, Filip Marić, Brandon Wagstaff, Rupam Mahmood, Jonathan Kelly

Learning or identifying dynamics from a sequence of high-dimensional observations is a difficult challenge in many domains, including reinforcement learning and control.
