no code implementations • 15 Dec 2023 • Jens Müller, Lars Kühmichel, Martin Rohbeck, Stefan T. Radev, Ullrich Köthe
In this work, we analyze the conditions under which information about the context of an input $X$ can improve the predictions of deep learning models in new domains.
1 code implementation • 7 Dec 2023 • Lukas Schumacher, Martin Schnuerch, Andreas Voss, Stefan T. Radev
To validate our models, we assess whether the inferred parameter trajectories align with the patterns and sequences of the experimental manipulations.
no code implementations • 17 Nov 2023 • Marvin Schmitt, Stefan T. Radev, Paul-Christian Bürkner
We present multimodal neural posterior estimation (MultiNPE), a method to integrate heterogeneous data from different sources in simulation-based inference with neural networks.
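The core idea of integrating heterogeneous sources can be illustrated with a minimal sketch: each data source gets its own embedding network, and the embeddings are fused into a single conditioning vector for the posterior estimator. All names, dimensions, and the random weights below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: two heterogeneous sources observed for the same
# underlying parameters -- a short time series and a few scalar covariates.
time_series = rng.normal(size=(128, 10))  # 128 simulations, 10 time points
scalars = rng.normal(size=(128, 3))       # 128 simulations, 3 covariates

def embed(x, w):
    """Stand-in for a learned, source-specific embedding network."""
    return np.tanh(x @ w)

# Source-specific embeddings (random weights stand in for trained ones).
w_ts = rng.normal(size=(10, 8))
w_sc = rng.normal(size=(3, 8))
h_ts = embed(time_series, w_ts)
h_sc = embed(scalars, w_sc)

# Late fusion: concatenate per-source embeddings into one conditioning
# vector that a neural posterior estimator could take as input.
fused = np.concatenate([h_ts, h_sc], axis=-1)
print(fused.shape)  # (128, 16)
```

The design choice sketched here (separate encoders, then fusion) is one common way to handle sources with different shapes and noise characteristics; the paper compares several such fusion strategies.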
no code implementations • 17 Oct 2023 • Lasse Elsemüller, Hans Olischläger, Marvin Schmitt, Paul-Christian Bürkner, Ullrich Köthe, Stefan T. Radev
In this work, we propose sensitivity-aware amortized Bayesian inference (SA-ABI), a multifaceted approach to efficiently integrate sensitivity analyses into simulation-based inference with neural networks.
no code implementations • 6 Oct 2023 • Marvin Schmitt, Desi R. Ivanova, Daniel Habermann, Ullrich Köthe, Paul-Christian Bürkner, Stefan T. Radev
We propose a method to improve the efficiency and accuracy of amortized Bayesian inference by leveraging universal symmetries in the joint probabilistic model of parameters and data.
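As background (stated here from Bayes' rule, not as the paper's exact formulation): for a fixed dataset $x$, the marginal likelihood computed from any parameter value must agree,

$$
p(x) \;=\; \frac{p(\theta)\,p(x \mid \theta)}{p(\theta \mid x)} \qquad \text{for all } \theta,
$$

so an approximate posterior $q(\theta \mid x)$ can be penalized whenever the ratio $p(\theta)\,p(x \mid \theta)/q(\theta \mid x)$ varies across $\theta$. This self-consistency property holds for every well-specified joint model, which is what makes the symmetry "universal".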
1 code implementation • 22 Aug 2023 • Florence Bockting, Stefan T. Radev, Paul-Christian Bürkner
Our results support the claim that our method is largely independent of the underlying model structure and adaptable to various elicitation techniques, including quantile-based, moment-based, and histogram-based methods.
1 code implementation • 17 Mar 2023 • Jens Müller, Stefan T. Radev, Robert Schmier, Felix Draxler, Carsten Rother, Ullrich Köthe
We investigate a "learning to reject" framework to address the problem of silent failures in Domain Generalization (DG), where the test distribution differs from the training distribution.
3 code implementations • 17 Feb 2023 • Stefan T. Radev, Marvin Schmitt, Valentin Pratz, Umberto Picchini, Ullrich Köthe, Paul-Christian Bürkner
This work proposes "jointly amortized neural approximation" (JANA) of intractable likelihood functions and posterior densities arising in Bayesian surrogate modeling and simulation-based inference.
2 code implementations • 27 Jan 2023 • Lasse Elsemüller, Martin Schnuerch, Paul-Christian Bürkner, Stefan T. Radev
Bayesian model comparison (BMC) offers a principled approach for assessing the relative merits of competing computational models and propagating uncertainty into model selection decisions.
2 code implementations • 23 Nov 2022 • Lukas Schumacher, Paul-Christian Bürkner, Andreas Voss, Ullrich Köthe, Stefan T. Radev
Our results show that the deep learning approach efficiently captures the temporal dynamics of the model.
1 code implementation • 13 Oct 2022 • Marvin Schmitt, Stefan T. Radev, Paul-Christian Bürkner
Bayesian model comparison (BMC) offers a principled probabilistic approach to study and rank competing models.
2 code implementations • 16 Dec 2021 • Marvin Schmitt, Paul-Christian Bürkner, Ullrich Köthe, Stefan T. Radev
Neural density estimators have proven remarkably powerful in performing efficient simulation-based Bayesian inference in various research domains.
1 code implementation • 1 Oct 2020 • Stefan T. Radev, Frederik Graw, Simiao Chen, Nico T. Mutters, Vanessa M. Eichel, Till Bärnighausen, Ullrich Köthe
Mathematical models in epidemiology are an indispensable tool to determine the dynamics and important characteristics of infectious diseases.
no code implementations • 8 May 2020 • Stefan T. Radev, Andreas Voss, Eva Marie Wieschen, Paul-Christian Bürkner
As models of cognition grow in complexity and number of parameters, Bayesian inference with standard methods can become intractable, especially when the data-generating model is of unknown analytic form.
1 code implementation • 22 Apr 2020 • Stefan T. Radev, Marco D'Alessandro, Ulf K. Mertens, Andreas Voss, Ullrich Köthe, Paul-Christian Bürkner
This makes the method particularly effective in scenarios where model fit needs to be assessed for a large number of datasets and per-dataset inference is practically infeasible. Finally, we propose a novel way to measure epistemic uncertainty in model comparison problems.
2 code implementations • 13 Mar 2020 • Stefan T. Radev, Ulf K. Mertens, Andreas Voss, Lynton Ardizzone, Ullrich Köthe
In addition, our method incorporates a summary network trained to embed the observed data into maximally informative summary statistics.
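A summary network for i.i.d. observations is typically built to be permutation-invariant, e.g. in the deep-set style: embed each observation, pool with a symmetric operation, then map the pooled vector to a fixed-size summary. The sketch below assumes this style; sizes and random weights are illustrative, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical i.i.d. dataset: 100 observations of a 2-D quantity.
data = rng.normal(size=(100, 2))

def summary_network(x, w_in, w_out):
    """Deep-set style summary: per-observation embedding followed by mean
    pooling, so the output is invariant to the order of observations."""
    h = np.tanh(x @ w_in)           # embed each observation independently
    pooled = h.mean(axis=0)         # permutation-invariant pooling
    return np.tanh(pooled @ w_out)  # fixed-size summary vector

w_in = rng.normal(size=(2, 16))
w_out = rng.normal(size=(16, 4))

s1 = summary_network(data, w_in, w_out)
s2 = summary_network(data[::-1], w_in, w_out)  # reversed observation order

# The summary does not change when the observations are permuted.
print(np.allclose(s1, s2))  # True
```

In the actual method, such a network is trained jointly with the inference network, so the pooled summary is driven to retain exactly the information the posterior needs.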