Search Results for author: Zachary N. Sunberg

Found 10 papers, 4 papers with code

Cieran: Designing Sequential Colormaps via In-Situ Active Preference Learning

no code implementations • 25 Feb 2024 • Matt-Heun Hong, Zachary N. Sunberg, Danielle Albers Szafir

In an evaluation with twelve scientists, we found that Cieran effectively modeled user preferences to rank colormaps and leveraged this model to create new, high-quality designs.
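
For context, here is a minimal sketch of ranking items from pairwise preference feedback with a Bradley–Terry style logistic model, the general family of techniques behind in-situ preference learning. The colormap feature vectors, comparisons, and learning rate are illustrative assumptions, not Cieran's actual model.

```python
# Minimal pairwise-preference ranking sketch (Bradley-Terry style).
# Not the Cieran implementation; features and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical colormap feature vectors (e.g., lightness range, hue spread, ...).
items = rng.normal(size=(8, 3))

# Preference data: (i, j) means the user preferred item i over item j.
preferences = [(0, 3), (2, 5), (0, 5), (4, 1), (2, 3)]

w = np.zeros(items.shape[1])          # linear utility weights
lr = 0.1
for _ in range(500):                  # gradient ascent on the logistic likelihood
    grad = np.zeros_like(w)
    for i, j in preferences:
        diff = items[i] - items[j]
        p = 1.0 / (1.0 + np.exp(-w @ diff))   # P(i preferred over j)
        grad += (1.0 - p) * diff
    w += lr * grad

ranking = np.argsort(items @ w)[::-1]  # best-ranked item first
print("utility weights:", w)
print("ranking:", ranking)
```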

Feasibility-Guided Safety-Aware Model Predictive Control for Jump Markov Linear Systems

no code implementations • 21 Oct 2023 • Zakariya Laouar, Rayan Mazouz, Tyler Becker, Qi Heng Ho, Zachary N. Sunberg

In this paper, we present a framework that synthesizes maximally safe control policies for Jump Markov Linear Systems subject to stochastic mode switches.

Model Predictive Control
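
A minimal simulation sketch of the jump Markov linear system setting this paper targets: linear dynamics whose system matrices switch according to a Markov chain over modes. The matrices, transition probabilities, and the placeholder state-feedback gain are illustrative assumptions; the paper's feasibility-guided, safety-aware MPC is what would replace that gain.

```python
# Jump Markov linear system: x_{k+1} = A[m_k] x_k + B[m_k] u_k + w_k,
# where the mode m_k evolves as a Markov chain.  Matrices, mode-transition
# probabilities, and the simple feedback gain are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

A = [np.array([[1.0, 0.1], [0.0, 1.0]]),      # mode 0 dynamics
     np.array([[1.0, 0.1], [0.0, 0.8]])]      # mode 1 dynamics
B = [np.array([[0.0], [0.1]]),
     np.array([[0.0], [0.05]])]
P = np.array([[0.95, 0.05],                   # mode transition matrix
              [0.10, 0.90]])

K = np.array([[1.0, 2.0]])                    # placeholder state-feedback gain

x = np.array([1.0, 0.0])
mode = 0
for k in range(50):
    u = -K @ x                                # a real controller would solve an MPC problem here
    w = rng.normal(scale=0.01, size=2)        # process noise
    x = A[mode] @ x + (B[mode] @ u).ravel() + w
    mode = rng.choice(2, p=P[mode])           # stochastic mode switch
print("final state:", x, "final mode:", mode)
```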

Recursively-Constrained Partially Observable Markov Decision Processes

no code implementations • 15 Oct 2023 • Qi Heng Ho, Tyler Becker, Benjamin Kraske, Zakariya Laouar, Martin S. Feather, Federico Rossi, Morteza Lahijanian, Zachary N. Sunberg

Evaluations on a set of benchmark problems demonstrate the efficacy of our algorithm and show that policies for RC-POMDPs produce more desirable behaviors than policies for C-POMDPs.

Explanation through Reward Model Reconciliation using POMDP Tree Search

no code implementations • 1 May 2023 • Benjamin D. Kraske, Anshu Saksena, Anna L. Buczak, Zachary N. Sunberg

As artificial intelligence (AI) algorithms are increasingly used in mission-critical applications, promoting user trust in these systems will be essential to their success.

Sampling-based Reactive Synthesis for Nondeterministic Hybrid Systems

no code implementations • 14 Apr 2023 • Qi Heng Ho, Zachary N. Sunberg, Morteza Lahijanian

This paper introduces a sampling-based strategy synthesis algorithm for nondeterministic hybrid systems with complex continuous dynamics under temporal and reachability constraints.

Motion Planning
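
A heavily simplified sketch of the sampling-based flavor of such synthesis: grow a search tree by sampling and steering toward random targets until a goal region is reached (a basic RRT-style reachability search). The single-integrator dynamics, goal region, and goal-biased sampling are illustrative assumptions and omit the nondeterminism and temporal-logic machinery of the paper.

```python
# Sketch of a sampling-based (RRT-style) search for a reachability objective.
# Dynamics, goal region, and sampling scheme are illustrative assumptions.
import math
import random

random.seed(0)
GOAL, GOAL_RADIUS, STEP = (1.0, 1.0), 0.1, 0.05

def steer(node, target, step=STEP):
    """Move from node toward target by at most `step` (illustrative dynamics)."""
    d = math.dist(node, target)
    if d <= step:
        return target
    t = step / d
    return (node[0] + t * (target[0] - node[0]),
            node[1] + t * (target[1] - node[1]))

start = (0.0, 0.0)
parent = {start: None}                 # tree: node -> parent node
for _ in range(5000):
    # Goal-biased sampling: occasionally aim straight for the goal region.
    sample = GOAL if random.random() < 0.1 else (random.uniform(0, 1.2),
                                                 random.uniform(0, 1.2))
    nearest = min(parent, key=lambda n: math.dist(n, sample))
    child = steer(nearest, sample)
    parent[child] = nearest
    if math.dist(child, GOAL) <= GOAL_RADIUS:
        print(f"goal reached with {len(parent)} tree nodes")
        break
```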

Optimality Guarantees for Particle Belief Approximation of POMDPs

1 code implementation • 10 Oct 2022 • Michael H. Lim, Tyler J. Becker, Mykel J. Kochenderfer, Claire J. Tomlin, Zachary N. Sunberg

Thus, when combined with sparse sampling MDP algorithms, this approach can yield algorithms for POMDPs that have no direct theoretical dependence on the size of the state and observation spaces.
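
A minimal sketch of the particle belief representation this analysis concerns: a POMDP belief approximated by weighted state particles that are propagated with a generative model and reweighted by an observation likelihood. The one-dimensional linear-Gaussian model and multinomial resampling below are illustrative assumptions.

```python
# Particle belief sketch: approximate a POMDP belief with weighted state
# particles, propagate them with a generative model, and reweight them by an
# observation likelihood.  The 1-D linear-Gaussian model is illustrative.
import numpy as np

rng = np.random.default_rng(2)

def transition(s, a):
    """Generative transition model: s' ~ s + a + noise (illustrative)."""
    return s + a + rng.normal(scale=0.1, size=s.shape)

def obs_likelihood(o, sp, sigma=0.2):
    """Likelihood of observing o in state sp (illustrative Gaussian sensor)."""
    return np.exp(-0.5 * ((o - sp) / sigma) ** 2)

def update_belief(particles, weights, a, o):
    """One particle belief update: propagate, reweight, normalize, resample."""
    sp = transition(particles, a)
    w = weights * obs_likelihood(o, sp)
    w /= w.sum()
    idx = rng.choice(len(sp), size=len(sp), p=w)   # simple multinomial resampling
    return sp[idx], np.full(len(sp), 1.0 / len(sp))

# Initial belief: N particles drawn from a broad prior.
N = 1000
particles = rng.normal(loc=0.0, scale=1.0, size=N)
weights = np.full(N, 1.0 / N)

particles, weights = update_belief(particles, weights, a=0.5, o=0.7)
print("belief mean ~", particles.mean(), "std ~", particles.std())
```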

Automaton-Guided Control Synthesis for Signal Temporal Logic Specifications

no code implementations • 8 Jul 2022 • Qi Heng Ho, Roland B. Ilyes, Zachary N. Sunberg, Morteza Lahijanian

This paper presents an algorithmic framework for control synthesis of continuous dynamical systems subject to signal temporal logic (STL) specifications.
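
For reference, a small sketch of the quantitative (robustness) semantics that underlie STL specifications: "always" maps to a minimum over the horizon and "eventually" to a maximum, with a positive value indicating satisfaction. The signal and predicates are illustrative; the paper's automaton-guided synthesis itself is not shown.

```python
# Quantitative (robustness) semantics for two basic STL operators over a
# discrete-time signal.  The signal and predicates below are illustrative.

def always(rho):      # G phi: worst case over the horizon
    return min(rho)

def eventually(rho):  # F phi: best case over the horizon
    return max(rho)

# Signal x(t) and the predicate x > 0.5, whose robustness is x(t) - 0.5.
signal = [0.9, 0.8, 0.6, 0.7, 1.0]
rho = [x - 0.5 for x in signal]

print("G (x > 0.5) robustness:", always(rho))      # > 0  =>  satisfied
print("F (x > 0.9) robustness:", eventually([x - 0.9 for x in signal]))
```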

Compositional Learning-based Planning for Vision POMDPs

1 code implementation • 17 Dec 2021 • Sampada Deglurkar, Michael H. Lim, Johnathan Tucker, Zachary N. Sunberg, Aleksandra Faust, Claire J. Tomlin

The Partially Observable Markov Decision Process (POMDP) is a powerful framework for capturing decision-making problems that involve state and transition uncertainty.

Decision Making
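
For readers unfamiliar with the formalism, a minimal generative-model view of a POMDP: the agent only receives observations and rewards, never the hidden state. The two-state toy problem below is an illustrative assumption, not the vision-based domains studied in the paper.

```python
# Minimal generative POMDP interface: given a state and action, sample the
# next state, an observation, and a reward.  The two-state toy problem is an
# illustrative assumption, not the vision-based domains from the paper.
import random
from dataclasses import dataclass

random.seed(3)

@dataclass
class ToyPOMDP:
    discount: float = 0.95
    states = ("left", "right")      # hidden state: where the reward is
    actions = ("listen", "open-left", "open-right")

    def step(self, s, a):
        if a == "listen":
            # Noisy observation of the hidden state; the state does not change.
            o = s if random.random() < 0.85 else ("left" if s == "right" else "right")
            return s, o, -1.0
        # Opening a door yields +10 or -100 and resets the hidden state.
        r = 10.0 if a == f"open-{s}" else -100.0
        return random.choice(self.states), "reset", r

pomdp = ToyPOMDP()
s = "left"
for a in ("listen", "listen", "open-left"):
    s, o, r = pomdp.step(s, a)
    print(a, "->", o, r)
```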

Voronoi Progressive Widening: Efficient Online Solvers for Continuous State, Action, and Observation POMDPs

1 code implementation • 18 Dec 2020 • Michael H. Lim, Claire J. Tomlin, Zachary N. Sunberg

This paper introduces Voronoi Progressive Widening (VPW), a generalization of Voronoi optimistic optimization (VOO) and action progressive widening to partially observable Markov decision processes (POMDPs).
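
A sketch of the action progressive widening rule that VPW generalizes: a search node adds a newly sampled action only while its number of action children is below k·N^α (N being the node's visit count), and otherwise selects among existing children. The constants, UCB1 selection, and uniform action sampler are illustrative assumptions; VPW replaces the uniform sampler with Voronoi optimistic optimization.

```python
# Action progressive widening sketch: add a new continuous action to a search
# node only while |children| <= k * N**alpha, otherwise pick among existing
# children (UCB1 here).  Constants and the uniform action sampler are
# illustrative; VPW replaces the uniform sampler with Voronoi-based sampling.
import math
import random

random.seed(4)
K, ALPHA, C_UCB = 2.0, 0.5, 1.0

class Node:
    def __init__(self):
        self.visits = 0
        self.children = {}          # action -> [visit count, mean value]

    def select_action(self):
        if len(self.children) <= K * max(self.visits, 1) ** ALPHA:
            a = random.uniform(-1.0, 1.0)          # widen: sample a new action
            self.children[a] = [0, 0.0]
            return a
        # Otherwise select an existing action by UCB1.
        return max(self.children, key=lambda a: self.children[a][1]
                   + C_UCB * math.sqrt(math.log(self.visits) / self.children[a][0]))

    def update(self, a, value):
        self.visits += 1
        n, mean = self.children[a]
        self.children[a] = [n + 1, mean + (value - mean) / (n + 1)]

node = Node()
for _ in range(200):
    a = node.select_action()
    node.update(a, -(a - 0.3) ** 2)    # toy objective: value peaks at a = 0.3
best = max(node.children, key=lambda a: node.children[a][1])
print(f"{len(node.children)} actions tried; best ~ {best:.2f}")
```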

Sparse tree search optimality guarantees in POMDPs with continuous observation spaces

1 code implementation • 10 Oct 2019 • Michael H. Lim, Claire J. Tomlin, Zachary N. Sunberg

Partially observable Markov decision processes (POMDPs) with continuous state and observation spaces have powerful flexibility for representing real-world decision and control problems but are notoriously difficult to solve.
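
A sketch of the weighting idea commonly used for continuous observation spaces, where an exact observation value essentially never repeats: sampled next states are weighted by an observation density rather than matched by equality. The Gaussian sensor model and one-dimensional states are illustrative assumptions, not the paper's algorithm or its guarantees.

```python
# Continuous-observation sketch: because an exact observation value essentially
# never repeats, next-state samples are weighted by an observation density
# instead of being matched by equality.  The Gaussian sensor model and the
# one-dimensional states are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)

def obs_density(o, sp, sigma=0.3):
    """Probability density of observing o from state sp (illustrative sensor)."""
    return np.exp(-0.5 * ((o - sp) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Candidate next states sampled from a generative model, and one real-valued
# observation received from the environment.
next_states = rng.normal(loc=1.0, scale=0.5, size=10)
observation = 1.2

weights = obs_density(observation, next_states)
weights /= weights.sum()

# The weighted sample set serves as the belief at the next tree depth.
estimate = np.dot(weights, next_states)
print("posterior-weighted state estimate:", round(float(estimate), 3))
```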
