Search Results for author: Michael H. Lim

Found 7 papers, 5 papers with code

Multi-Agent Reachability Calibration with Conformal Prediction

no code implementations · 2 Apr 2023 · Anish Muthali, Haotian Shen, Sampada Deglurkar, Michael H. Lim, Rebecca Roelofs, Aleksandra Faust, Claire Tomlin

We investigate methods to provide safety assurances for autonomous agents that incorporate predictions of other, uncontrolled agents' behavior into their own trajectory planning.
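At its core, conformal prediction turns held-out prediction errors into a calibrated bound that can inflate a predicted trajectory into a safety margin. The sketch below shows standard split conformal prediction on scalar position errors; the function name, toy error model, and miscoverage level `alpha` are illustrative, not taken from the paper.

```python
import math
import random

def conformal_radius(pred_errors, alpha=0.1):
    """Split-conformal quantile of held-out prediction errors.

    Under exchangeability, a new agent's true position falls within this
    radius of its predicted position with probability >= 1 - alpha.
    """
    scores = sorted(pred_errors)
    n = len(scores)
    # finite-sample correction: take the ceil((n + 1)(1 - alpha))-th
    # smallest score (capped at the largest one)
    k = min(n, math.ceil((n + 1) * (1 - alpha)))
    return scores[k - 1]

# toy calibration set: distances between predicted and observed positions
rng = random.Random(0)
errors = [rng.expovariate(2.0) for _ in range(200)]
r = conformal_radius(errors, alpha=0.1)
# inflating each predicted waypoint by r yields a calibrated keep-out disc
```

A trajectory planner can then treat the inflated discs around the predicted positions of uncontrolled agents as obstacles.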

Autonomous Driving Conformal Prediction +2

Optimality Guarantees for Particle Belief Approximation of POMDPs

1 code implementation · 10 Oct 2022 · Michael H. Lim, Tyler J. Becker, Mykel J. Kochenderfer, Claire J. Tomlin, Zachary N. Sunberg

Thus, when combined with sparse sampling MDP algorithms, this approach can yield algorithms for POMDPs that have no direct theoretical dependence on the size of the state and observation spaces.
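The underlying idea is that a weighted particle set approximates the belief, so planning over beliefs reduces to planning over particle sets. The following bootstrap-filter step is a generic sketch of that approximation, not the paper's exact algorithm; the toy 1-D models and function names are assumptions for illustration.

```python
import math
import random

def particle_belief_update(particles, action, observation,
                           transition, obs_logpdf, rng):
    """One bootstrap-filter step: the particle set stands in for the full
    belief, turning the POMDP into an MDP over particle sets."""
    # propagate each particle through the (stochastic) transition model
    propagated = [transition(s, action, rng) for s in particles]
    # weight by the log-likelihood of the received observation
    logw = [obs_logpdf(observation, s) for s in propagated]
    m = max(logw)
    w = [math.exp(l - m) for l in logw]
    total = sum(w)
    w = [x / total for x in w]
    # multinomial resampling back to an unweighted particle set
    return rng.choices(propagated, weights=w, k=len(particles))

# toy 1-D example: Gaussian motion noise, Gaussian observation model
rng = random.Random(0)
transition = lambda s, a, rng: s + a + rng.gauss(0.0, 0.1)
obs_logpdf = lambda o, s: -0.5 * ((o - s) / 0.1) ** 2
belief = [rng.uniform(-1.0, 1.0) for _ in range(500)]
belief = particle_belief_update(belief, 0.0, 0.5, transition, obs_logpdf, rng)
```

After observing 0.5, the resampled particles concentrate near that value; a sparse sampling planner can branch on such particle sets without ever enumerating the state or observation space.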

Navigation between initial and desired community states using shortcuts

2 code implementations · 15 Apr 2022 · Benjamin W. Blonder, Michael H. Lim, Zachary Sunberg, Claire Tomlin

Using several empirical datasets, we show that (1) non-brute-force navigation is only possible between some state pairs; (2) shortcuts exist between many state pairs; and (3) changes in abundance and richness are the strongest predictors of shortcut existence, independent of dataset and algorithm choices.

Management

Compositional Learning-based Planning for Vision POMDPs

1 code implementation · 17 Dec 2021 · Sampada Deglurkar, Michael H. Lim, Johnathan Tucker, Zachary N. Sunberg, Aleksandra Faust, Claire J. Tomlin

The Partially Observable Markov Decision Process (POMDP) is a powerful framework for capturing decision-making problems that involve state and transition uncertainty.

Decision Making

Voronoi Progressive Widening: Efficient Online Solvers for Continuous State, Action, and Observation POMDPs

1 code implementation · 18 Dec 2020 · Michael H. Lim, Claire J. Tomlin, Zachary N. Sunberg

This paper introduces Voronoi Progressive Widening (VPW), a generalization of Voronoi optimistic optimization (VOO) and action progressive widening to partially observable Markov decision processes (POMDPs).
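Action progressive widening, which VPW generalizes, adds a new action to a search node only when the node's visit count justifies it, so the sampled action set grows sublinearly. A minimal sketch of that criterion (the `k`/`alpha` parameters and uniform sampling are illustrative; VPW would instead bias new samples toward promising Voronoi cells):

```python
import random

def progressive_widening(node, sample_new_action, k=3.0, alpha=0.5):
    """Widen the action set only while |A| < k * N^alpha, where N is the
    node's visit count, keeping growth sublinear in N."""
    if len(node["actions"]) < k * node["visits"] ** alpha:
        node["actions"].append(sample_new_action())
    return node["actions"]

# toy continuous action space: sample uniformly from [0, 1]
rng = random.Random(0)
node = {"visits": 0, "actions": []}
for _ in range(100):
    node["visits"] += 1
    progressive_widening(node, lambda: rng.uniform(0.0, 1.0))
```

After 100 visits the node holds at most k * sqrt(100) = 30 actions rather than 100, which is what makes tree search tractable over continuous action spaces.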

Sparse tree search optimality guarantees in POMDPs with continuous observation spaces

1 code implementation · 10 Oct 2019 · Michael H. Lim, Claire J. Tomlin, Zachary N. Sunberg

Partially observable Markov decision processes (POMDPs) with continuous state and observation spaces have powerful flexibility for representing real-world decision and control problems but are notoriously difficult to solve.
