no code implementations • 9 Mar 2023 • Cambridge Yang, Michael Littman, Michael Carbin
In particular, for the analysis that considers only sample complexity, we prove that if an objective given as an oracle is uniformly continuous, then it is PAC-learnable.
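A concrete instance of a uniformly continuous objective is the familiar discounted sum: changing the rewards after step N can move the objective's value by at most γ^N/(1−γ), so the value is determined to any desired precision by a finite prefix. The sketch below (my own illustration, not code from the paper) checks this tail bound numerically for bounded rewards:

```python
# Illustrative sketch (not the paper's code): the discounted-sum
# objective over reward sequences in [0, 1] is uniformly continuous,
# because rewards after step N shift the value by at most
# gamma**N / (1 - gamma), the tail of the geometric series.

def discounted_sum(rewards, gamma=0.9):
    """Discounted sum of a finite reward sequence."""
    return sum(gamma**t * r for t, r in enumerate(rewards))

gamma = 0.9
horizon = 50
prefix = 25
seq_a = [1.0] * horizon                      # rewards bounded in [0, 1]
seq_b = [1.0] * prefix + [0.0] * (horizon - prefix)  # same first 25 steps

gap = abs(discounted_sum(seq_a, gamma) - discounted_sum(seq_b, gamma))
bound = gamma**prefix / (1 - gamma)          # tail bound on the difference
assert gap <= bound
```

Intuitively, this kind of bound is what makes a sample-complexity analysis possible: an oracle objective that depends uniformly continuously on the trajectory can be estimated from finite prefixes.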
no code implementations • 24 Nov 2021 • Cambridge Yang, Michael Littman, Michael Carbin
In recent years, researchers have made significant progress in devising reinforcement-learning algorithms for optimizing linear temporal logic (LTL) objectives and LTL-like objectives.
no code implementations • AAAI Workshop CLeaR 2022 • Cambridge Yang, Michael Littman, Michael Carbin
In recent years, researchers have made significant progress in devising reinforcement-learning algorithms for optimizing linear temporal logic (LTL) objectives and LTL-like objectives.
1 code implementation • NeurIPS 2019 • Charith Mendis, Cambridge Yang, Yewen Pu, Saman Amarasinghe, Michael Carbin
We show that the learned policy produces vectorization schemes that outperform industry-standard compiler heuristics in both static measures and runtime performance.