We present a tutorial on Bayesian optimization, a method for finding the maximum of expensive-to-evaluate cost functions.
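A minimal sketch of the Bayesian optimization loop described above, assuming numpy: a Gaussian-process surrogate with an RBF kernel is fit to the observations, and an upper-confidence-bound acquisition picks the next evaluation point. The objective, kernel length scale, and UCB coefficient are illustrative choices, not values from the tutorial.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(x_obs, y_obs, x_grid, noise=1e-6):
    # Standard zero-mean GP regression: posterior mean and variance on a grid.
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_obs, x_grid)
    Kss = rbf(x_grid, x_grid)
    mu = Ks.T @ np.linalg.solve(K, y_obs)
    var = np.clip(np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks)), 0.0, None)
    return mu, var

def objective(x):
    # Hypothetical expensive black-box function (maximum at x = 2).
    return -(x - 2.0) ** 2

rng = np.random.default_rng(1)
x_grid = np.linspace(0.0, 4.0, 200)
x_obs = rng.uniform(0.0, 4.0, 3)       # a few initial evaluations
y_obs = objective(x_obs)
for _ in range(10):
    mu, var = gp_posterior(x_obs, y_obs, x_grid)
    ucb = mu + 2.0 * np.sqrt(var)      # upper confidence bound acquisition
    x_next = x_grid[np.argmax(ucb)]    # evaluate where the surrogate is most promising
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))
best = x_obs[np.argmax(y_obs)]
```

Each iteration spends surrogate computation instead of an expensive objective call, which is the trade that makes the method worthwhile when evaluations dominate the cost.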
Semantic segmentation is an important tool for visual scene understanding, and a meaningful measure of uncertainty is essential for decision making.
By contrast, Bayesian models offer a mathematically grounded framework for reasoning about model uncertainty, but they usually come at a prohibitive computational cost.
Computational models in fields such as computational neuroscience are often evaluated via stochastic simulation or numerical approximation.
The prior over functions is defined implicitly by the mean and covariance functions, which determine the smoothness and variability of the function.
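A minimal sketch of how the mean and covariance functions define such a prior, assuming numpy: with a zero mean function and a squared-exponential covariance, draws from the prior are samples from a multivariate Gaussian over a grid of inputs. The length scale and variance values here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0, variance=1.0):
    # Squared-exponential covariance: k(x, x') = s^2 * exp(-(x - x')^2 / (2 l^2)).
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 50)
mean = np.zeros_like(x)                       # zero mean function
cov = rbf_kernel(x, x) + 1e-9 * np.eye(50)    # small jitter for numerical stability
samples = rng.multivariate_normal(mean, cov, size=3)  # three function draws from the prior
```

Shrinking the length scale yields wigglier draws and growing the variance amplifies their range, which is exactly the sense in which these two functions control smoothness and variability.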
In this study, we present a method to make a complex tree ensemble interpretable by simplifying the model.
Rather than assuming that each word has a single embedding fixed across the entire text collection, as in standard word embedding methods, our Bayesian model generates an embedding for each occurrence of a word from a word-specific prior density.
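A minimal sketch of the generative idea above, assuming numpy: each word carries a prior density (here a diagonal Gaussian, an illustrative assumption), and every occurrence draws its own embedding from that prior instead of reusing one fixed vector. The vocabulary, dimensionality, and helper names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
vocab = ["bank", "river", "money"]
# Hypothetical word-specific priors: a mean vector and diagonal std per word.
prior_mean = {w: rng.normal(0.0, 1.0, dim) for w in vocab}
prior_std = {w: np.full(dim, 0.1) for w in vocab}

def embed_occurrence(word):
    # Draw a fresh embedding for this occurrence from the word's prior density,
    # rather than looking up one fixed vector per word type.
    return rng.normal(prior_mean[word], prior_std[word])

tokens = ["bank", "money", "bank"]
occ = [embed_occurrence(t) for t in tokens]
# The two "bank" occurrences get distinct vectors centered on the same prior mean.
```

Because occurrences of the same word are distinct draws around a shared mean, the representation can capture occurrence-level variation while still tying all uses of a word together through its prior.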
This work tackles the problem of learning a set of language-specific acoustic units from unlabeled speech recordings, given a set of labeled recordings from other languages.
In addition to voxel-wise uncertainty, we introduce four metrics to quantify structure-wise uncertainty in segmentation for quality control.