We present a tutorial on Bayesian optimization, a method for finding the maximum of expensive-to-evaluate cost functions.
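To make the idea concrete, here is a minimal sketch of Bayesian optimization in one dimension: a Gaussian-process surrogate is fit to the points evaluated so far, and an upper-confidence-bound acquisition picks the next query. The kernel length-scale, UCB coefficient, and grid size are illustrative assumptions, not values from the tutorial.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # squared-exponential kernel between two 1-D point sets (ls is assumed)
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def bayes_opt(f, bounds=(0.0, 1.0), n_init=3, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(*bounds, n_init)          # random initial design
    y = np.array([f(x) for x in X])
    grid = np.linspace(*bounds, 200)          # candidate set for the acquisition
    for _ in range(n_iter):
        K = rbf(X, X) + 1e-6 * np.eye(len(X))  # jitter for numerical stability
        Ks = rbf(grid, X)
        alpha = np.linalg.solve(K, y)
        mu = Ks @ alpha                        # GP posterior mean on the grid
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        sd = np.sqrt(np.clip(var, 1e-12, None))
        # upper-confidence-bound acquisition: favor high mean or high uncertainty
        x_next = grid[np.argmax(mu + 2.0 * sd)]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return X[np.argmax(y)], y.max()
```

The expensive function is evaluated only `n_init + n_iter` times; all other work happens on the cheap surrogate, which is the point of the method.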
Both Bayesian models and varying-coefficient models are very useful in practice, as they can model parameter heterogeneity in a generalizable way.
Semantic segmentation is an important tool for visual scene understanding, and a meaningful measure of uncertainty is essential for decision-making.
The popularity of Bayesian statistical methods has increased dramatically in recent years across many research areas and industrial applications.
In this paper, we develop a theoretical framework to approximate Bayesian inference for deep neural networks (DNNs) by imposing a Bernoulli distribution on the model weights.
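A Bernoulli distribution on the weights is commonly realized as Monte Carlo dropout: dropout is left on at test time and several stochastic forward passes are averaged to estimate the approximate posterior predictive. A minimal NumPy sketch, with a toy one-hidden-layer network whose weights, keep probability, and sample count are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy fixed weights for a 1-hidden-layer network (hypothetical values)
W1 = rng.normal(size=(1, 32))
W2 = rng.normal(size=(32, 1)) / np.sqrt(32)

def forward(x, p_keep=0.8):
    # Bernoulli mask on hidden units, i.e. a Bernoulli draw over rows of W2
    h = np.tanh(x @ W1)
    mask = rng.binomial(1, p_keep, size=h.shape) / p_keep
    return (h * mask) @ W2

def predict(x, T=200):
    # Monte Carlo estimate of the approximate posterior predictive:
    # mean over T stochastic passes, std as a crude uncertainty measure
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

mean, std = predict(np.array([[0.5]]))
```

The spread across stochastic passes is what gives the model-uncertainty estimate; a deterministic network would return the same output every time.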
In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost.
Computational models in fields such as computational neuroscience are often evaluated via stochastic simulation or numerical approximation.
We develop a novel Markov chain Monte Carlo (MCMC) method that exploits a hierarchy of models of increasing complexity to efficiently generate samples from an unnormalized target distribution.
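One common instantiation of such a model hierarchy is delayed-acceptance Metropolis-Hastings: a cheap approximate density screens each proposal before the expensive target is evaluated, and a second correction step keeps the chain exactly invariant for the unnormalized target. A minimal two-level sketch, assuming a single cheap surrogate and a symmetric Gaussian random-walk proposal (the function names and tuning constants are hypothetical):

```python
import math
import random

def delayed_acceptance_mh(log_target, log_cheap, x0, steps=5000, scale=0.5, seed=0):
    random.seed(seed)
    x, samples = x0, []
    for _ in range(steps):
        y = x + random.gauss(0.0, scale)
        # stage 1: screen the proposal with the cheap surrogate density
        a1 = math.exp(min(0.0, log_cheap(y) - log_cheap(x)))
        if random.random() < a1:
            # stage 2: correct with the expensive target so the chain still
            # samples the exact (unnormalized) target distribution
            a2 = math.exp(min(0.0, (log_target(y) - log_target(x))
                                   - (log_cheap(y) - log_cheap(x))))
            if random.random() < a2:
                x = y
        samples.append(x)
    return samples
```

The expensive `log_target` is only evaluated for proposals that survive the cheap screen, which is where the efficiency gain comes from; with more than two levels the same screen-then-correct step is applied recursively up the hierarchy.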
This makes the method particularly effective in scenarios where model fit needs to be assessed for a large number of datasets, so that per-dataset inference is practically infeasible. Finally, we propose a novel way to measure epistemic uncertainty in model comparison problems.
This work proposes "jointly amortized neural approximation" (JANA) of intractable likelihood functions and posterior densities arising in Bayesian surrogate modeling and simulation-based inference.