Search Results for author: Ian Char

Found 12 papers, 4 papers with code

Full Shot Predictions for the DIII-D Tokamak via Deep Recurrent Networks

no code implementations • 18 Apr 2024 • Ian Char, Youngseog Chung, Joseph Abbate, Egemen Kolemen, Jeff Schneider

Although tokamaks are among the most promising devices for realizing nuclear fusion as an energy source, key obstacles remain in understanding and controlling the dynamics of the plasma.

Near-optimal Policy Identification in Active Reinforcement Learning

no code implementations • 19 Dec 2022 • Xiang Li, Viraj Mehta, Johannes Kirschner, Ian Char, Willie Neiswanger, Jeff Schneider, Andreas Krause, Ilija Bogunovic

Many real-world reinforcement learning tasks require control of complex dynamical systems that involve both costly data acquisition processes and large state spaces.

Bayesian Optimization • reinforcement-learning • +1

Exploration via Planning for Information about the Optimal Trajectory

1 code implementation • 6 Oct 2022 • Viraj Mehta, Ian Char, Joseph Abbate, Rory Conlin, Mark D. Boyer, Stefano Ermon, Jeff Schneider, Willie Neiswanger

In this work, we develop a method that allows us to plan for exploration while taking both the task and the current knowledge about the dynamics into account.

Reinforcement Learning (RL)

How Useful are Gradients for OOD Detection Really?

no code implementations • 20 May 2022 • Conor Igoe, Youngseog Chung, Ian Char, Jeff Schneider

One critical challenge in deploying highly performant machine learning models in real-life applications is out-of-distribution (OOD) detection.

Computational Efficiency • Misconceptions • +1

BATS: Best Action Trajectory Stitching

no code implementations • 26 Apr 2022 • Ian Char, Viraj Mehta, Adam Villaflor, John M. Dolan, Jeff Schneider

Past efforts for developing algorithms in this area have revolved around introducing constraints to online reinforcement learning algorithms to ensure the actions of the learned policy are constrained to the logged data.

reinforcement-learning • Reinforcement Learning (RL)
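The constraint idea described in this snippet, keeping the learned policy's actions close to the logged data, is often expressed as a behavior-cloning-style penalty added to the policy objective. The sketch below is a minimal, hypothetical PyTorch illustration of that general pattern (the prior approaches the snippet mentions), not the BATS algorithm itself; the function name and `alpha` weight are made up for this example.

```python
import torch

def constrained_policy_loss(q_values, policy_actions, logged_actions, alpha=2.5):
    """Illustrative offline RL objective: maximize Q while staying near logged actions.

    The penalty term keeps the policy's actions close to the dataset's actions,
    which is the kind of constraint the snippet describes; alpha trades off
    estimated return against conservatism.
    """
    bc_penalty = ((policy_actions - logged_actions) ** 2).mean()
    return -q_values.mean() + alpha * bc_penalty

# Toy usage with random tensors standing in for critic values and actions.
loss = constrained_policy_loss(torch.randn(32), torch.randn(32, 4), torch.randn(32, 4))
```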

Deep Attentive Variational Inference

no code implementations • ICLR 2022 • Ifigeneia Apostolopoulou, Ian Char, Elan Rosenfeld, Artur Dubrawski

Moreover, when designing the conditioning factors of the involved distributions, the architecture of this class of models favors local interactions among the latent variables of neighboring layers.

Variational Inference
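The "local interactions between neighboring layers" that this snippet points to can be seen in a toy hierarchical prior where each latent layer is conditioned only on the layer directly above it. The sketch below is an illustrative stand-in written for this listing (class name and dimensions are hypothetical), not the paper's attentive architecture.

```python
import torch
import torch.nn as nn

class LocallyConditionedPrior(nn.Module):
    """Toy hierarchical prior in which layer l is conditioned only on layer l-1.

    This is the 'local interaction' pattern the snippet refers to: each
    conditioning factor sees just its neighboring layer, not the full hierarchy.
    """
    def __init__(self, n_layers=3, dim=8):
        super().__init__()
        self.cond = nn.ModuleList(nn.Linear(dim, 2 * dim) for _ in range(n_layers - 1))
        self.dim = dim

    def sample(self, batch_size=1):
        z = torch.randn(batch_size, self.dim)    # top layer: standard normal
        samples = [z]
        for layer in self.cond:                   # each layer sees only its parent
            mu, log_std = layer(z).chunk(2, dim=-1)
            z = mu + log_std.exp() * torch.randn_like(mu)
            samples.append(z)
        return samples
```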

Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing, and Improving Uncertainty Quantification

1 code implementation • 21 Sep 2021 • Youngseog Chung, Ian Char, Han Guo, Jeff Schneider, Willie Neiswanger

With increasing deployment of machine learning systems in various real-world tasks, there is a greater need for accurate quantification of predictive uncertainty.

BIG-bench Machine Learning • Uncertainty Quantification
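As one concrete example of what "assessing uncertainty quantification" involves, a simple calibration check compares the nominal coverage of Gaussian prediction intervals against observed coverage. The sketch below is a generic illustration written for this listing, not the toolbox's own API; the function name is made up.

```python
import numpy as np
from scipy.stats import norm

def observed_coverage(y_true, mean, std, nominal=0.9):
    """Fraction of targets inside the central `nominal` Gaussian prediction interval.

    A well-calibrated predictor should give observed coverage close to `nominal`.
    """
    z = norm.ppf(0.5 + nominal / 2.0)
    lower, upper = mean - z * std, mean + z * std
    return np.mean((y_true >= lower) & (y_true <= upper))

# Toy check: well-specified Gaussian predictions should cover roughly 90% of targets.
rng = np.random.default_rng(0)
mean = rng.normal(size=1000)
y = mean + rng.normal(scale=0.5, size=1000)
print(observed_coverage(y, mean, np.full(1000, 0.5), nominal=0.9))
```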

Beyond Pinball Loss: Quantile Methods for Calibrated Uncertainty Quantification

2 code implementations • NeurIPS 2021 • Youngseog Chung, Willie Neiswanger, Ian Char, Jeff Schneider

However, this loss restricts the scope of applicable regression models, limits the ability to target many desirable properties (e.g., calibration, sharpness, centered intervals), and may produce poor conditional quantiles.

regression • Uncertainty Quantification
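For reference, the pinball (quantile) loss this snippet refers to has a standard closed form. Below is a minimal NumPy sketch written for this listing, not taken from the paper's code.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Standard pinball (quantile) loss for quantile level tau in (0, 1).

    Under-predictions are weighted by tau and over-predictions by (1 - tau),
    so the minimizer is the tau-th conditional quantile.
    """
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# The 0.9 quantile penalizes predicting too low more than predicting too high.
y = np.array([1.0, 2.0, 3.0])
print(pinball_loss(y, y - 0.5, tau=0.9))  # under-prediction: 0.45
print(pinball_loss(y, y + 0.5, tau=0.9))  # over-prediction: 0.05
```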

Neural Dynamical Systems: Balancing Structure and Flexibility in Physical Prediction

no code implementations • 23 Jun 2020 • Viraj Mehta, Ian Char, Willie Neiswanger, Youngseog Chung, Andrew Oakleigh Nelson, Mark D Boyer, Egemen Kolemen, Jeff Schneider

We introduce Neural Dynamical Systems (NDS), a method of learning dynamical models in various gray-box settings which incorporates prior knowledge in the form of systems of ordinary differential equations.
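To make the gray-box idea concrete: one simple way to combine an ODE prior with a learned component is to add a neural residual to a known right-hand side and integrate the sum. The PyTorch sketch below is an illustrative toy under that assumption; the class name, the Euler integrator, and the linear-damping prior are all hypothetical and are not the paper's architecture.

```python
import torch
import torch.nn as nn

class GrayBoxDynamics(nn.Module):
    """Toy gray-box model: known ODE right-hand side plus a learned residual.

    `known_rhs(x, u)` encodes prior physics as dx/dt; the network only has to
    learn whatever the prior does not capture.
    """
    def __init__(self, known_rhs, state_dim, action_dim, hidden=64):
        super().__init__()
        self.known_rhs = known_rhs
        self.residual = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, x, u, dt=0.01):
        # One explicit Euler step of the combined dynamics.
        dxdt = self.known_rhs(x, u) + self.residual(torch.cat([x, u], dim=-1))
        return x + dt * dxdt

# Illustrative prior: linear damping, dx/dt = -0.1 * x.
model = GrayBoxDynamics(lambda x, u: -0.1 * x, state_dim=4, action_dim=2)
x_next = model(torch.zeros(1, 4), torch.zeros(1, 2))
```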

Neural Dynamical Systems

no code implementations • ICLR Workshop DeepDiffEq 2019 • Viraj Mehta, Ian Char, Willie Neiswanger, Youngseog Chung, Andrew Oakleigh Nelson, Mark D Boyer, Egemen Kolemen, Jeff Schneider

We introduce Neural Dynamical Systems (NDS), a method of learning dynamical models which incorporates prior knowledge in the form of systems of ordinary differential equations.

Offline Contextual Bayesian Optimization

1 code implementation • NeurIPS 2019 • Ian Char, Youngseog Chung, Willie Neiswanger, Kirthevasan Kandasamy, Oak Nelson, Mark Boyer, Egemen Kolemen

In black-box optimization, an agent repeatedly chooses a configuration to test, so as to find an optimal configuration.

Bayesian Optimization
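The black-box loop this snippet describes (repeatedly choose a configuration, test it, keep the best) can be written in a few lines. The sketch below uses random proposals and hypothetical function names purely for orientation; the paper's method additionally conditions on contexts and replaces random sampling with a model-guided acquisition rule.

```python
import numpy as np

def black_box_loop(objective, sample_config, n_trials=50, rng=None):
    """Generic black-box optimization loop: propose, evaluate, keep the best.

    `objective` and `sample_config` are hypothetical stand-ins; a Bayesian
    optimizer would replace random sampling with an acquisition function.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    best_cfg, best_val = None, -np.inf
    for _ in range(n_trials):
        cfg = sample_config(rng)   # choose a configuration to test
        val = objective(cfg)       # run the (expensive) evaluation
        if val > best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val

# Toy example: maximize a 1-D function over [0, 1].
cfg, val = black_box_loop(lambda c: -(c - 0.3) ** 2, lambda r: r.uniform(0, 1))
```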
