Search Results for author: Cambridge Yang

Found 4 papers, 1 paper with code

Computably Continuous Reinforcement-Learning Objectives are PAC-learnable

no code implementations • 9 Mar 2023 • Cambridge Yang, Michael Littman, Michael Carbin

In particular, for the analysis that considers only sample complexity, we prove that if an objective given as an oracle is uniformly continuous, then it is PAC-learnable.
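
The condition in this result has a compact form. As a sketch (the notation J for the objective and τ for a trajectory is assumed here for illustration, not quoted from the paper), uniform continuity says a long enough finite prefix determines the objective's value to any accuracy:

```latex
% Sketch of uniform continuity for an objective J mapping infinite
% trajectories \tau = (s_0, a_0, s_1, a_1, \ldots) to real values
% (notation assumed, not taken from the paper):
\forall \epsilon > 0 \;\; \exists N \in \mathbb{N} \;\;
\forall \tau, \tau' : \quad
\tau_{0:N} = \tau'_{0:N} \;\Longrightarrow\; |J(\tau) - J(\tau')| < \epsilon
```

Intuitively, this is what makes finite samples suffice: estimating J on length-N prefixes already pins its value down to within ε.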

General Reinforcement Learning, reinforcement-learning +1

On the (In)Tractability of Reinforcement Learning for LTL Objectives

no code implementations • 24 Nov 2021 • Cambridge Yang, Michael Littman, Michael Carbin

In recent years, researchers have made significant progress in devising reinforcement-learning algorithms for optimizing linear temporal logic (LTL) objectives and LTL-like objectives.
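
To make the setting concrete, here is a minimal sketch (hypothetical, not the paper's code) of why an LTL objective differs from a per-step reward: the objective "reach goal while avoiding hazard" — in LTL, (~hazard) U goal — depends on the whole trajectory, so it is tracked with a small monitor automaton whose verdict can stay undetermined after any finite prefix:

```python
# Monitor for the LTL objective (~hazard) U goal, sketched as a
# 3-state automaton over sets of atomic propositions.

def ltl_monitor_step(state, labels):
    """Advance the monitor one step.

    state: 'pending' | 'accepted' | 'rejected'
    labels: set of atomic propositions true in the current MDP state.
    """
    if state != 'pending':
        return state              # accepted/rejected are absorbing
    if 'goal' in labels:
        return 'accepted'         # goal reached before any hazard
    if 'hazard' in labels:
        return 'rejected'         # hazard seen first: objective violated
    return 'pending'              # still undetermined

# Any finite prefix can leave the monitor 'pending', so no finite
# sample fully reveals the objective's value -- the intuition behind
# the intractability results.
trace = [set(), {'hazard'}, {'goal'}]
state = 'pending'
for labels in trace:
    state = ltl_monitor_step(state, labels)
print(state)  # 'rejected': hazard occurred before goal
```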

reinforcement-learning, Reinforcement Learning (RL)

Reinforcement Learning with General LTL Objectives is Intractable

no code implementations • AAAI Workshop CLeaR 2022 • Cambridge Yang, Michael Littman, Michael Carbin

In recent years, researchers have made significant progress in devising reinforcement-learning algorithms for optimizing linear temporal logic (LTL) objectives and LTL-like objectives.

reinforcement-learning, Reinforcement Learning (RL)

Compiler Auto-Vectorization with Imitation Learning

1 code implementation • NeurIPS 2019 • Charith Mendis, Cambridge Yang, Yewen Pu, Saman Amarasinghe, Michael Carbin

We show that the learned policy produces vectorization schemes that outperform industry-standard compiler heuristics in both static measures and runtime performance.
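
The shape of this approach is behavior cloning. Here is a minimal sketch (hypothetical names and features, standing in for the paper's solver-based expert and learned model): imitate an "expert" that decides, for each candidate pair of scalar statements, whether to pack them into a vector instruction:

```python
# Behavior-cloning sketch (hypothetical, not the paper's code):
# collect (feature, expert-action) pairs from an expert oracle, then
# fit a trivial policy by majority vote per feature vector -- the
# shape of imitation learning, minus the neural network.
import random

def expert_should_pack(feat):
    # Hypothetical oracle standing in for an optimal solver:
    # pack when the two statements are adjacent and isomorphic.
    adjacent, isomorphic = feat
    return adjacent and isomorphic

def featurize():
    # Stand-in features for a candidate statement pair.
    return (random.random() < 0.5, random.random() < 0.5)

dataset = [(f, expert_should_pack(f))
           for f in (featurize() for _ in range(1000))]

votes = {}
for feat, action in dataset:
    yes, total = votes.get(feat, (0, 0))
    votes[feat] = (yes + action, total + 1)

def policy(feat):
    yes, total = votes.get(feat, (0, 1))
    return yes * 2 > total        # predict the expert's majority action

print(policy((True, True)))  # True: imitates "pack adjacent isomorphic pairs"
```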

Imitation Learning
