no code implementations • 30 Jan 2024 • Robert K. Niven, Laurent Cordier, Ali Mohammad-Djafari, Markus Abel, Markus Quade
For multivariate Gaussian likelihood and prior distributions, the Bayesian formulation gives Gaussian posterior and evidence distributions, in which the numerator terms can be expressed in terms of the Mahalanobis distance or ``Gaussian norm'' $||\vy-\hat{\vy}||^2_{M^{-1}} = (\vy-\hat{\vy})^\top {M^{-1}} (\vy-\hat{\vy})$, where $\vy$ is a vector variable, $\hat{\vy}$ is its estimator and $M$ is the covariance matrix.
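The quadratic form above is straightforward to compute numerically. The sketch below is not from the paper; the function name `mahalanobis_sq` and the example values are illustrative, and it simply evaluates $(\vy-\hat{\vy})^\top M^{-1} (\vy-\hat{\vy})$ using a linear solve rather than an explicit matrix inverse:

```python
import numpy as np

def mahalanobis_sq(y, y_hat, M):
    """Squared Mahalanobis distance ||y - y_hat||^2_{M^{-1}},
    i.e. (y - y_hat)^T M^{-1} (y - y_hat) for covariance matrix M."""
    r = y - y_hat
    # Solve M x = r instead of forming M^{-1} (better conditioned).
    return float(r @ np.linalg.solve(M, r))

# Illustrative values (not from the paper):
y = np.array([1.0, 2.0])
y_hat = np.array([0.0, 0.0])
M = np.array([[2.0, 0.0],
              [0.0, 0.5]])
d2 = mahalanobis_sq(y, y_hat, M)  # r = [1, 2], M^{-1} r = [0.5, 4], so d2 = 8.5
```

When $M$ is the identity, this reduces to the ordinary squared Euclidean norm $||\vy-\hat{\vy}||^2$.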
no code implementations • 28 Aug 2023 • Ali Mohammad-Djafari, Ning Chu, Li Wang, Liang Yu
However, to account for the uncertainties, we first need to understand Bayesian Deep Learning; then we can see how to use it for inverse problems.
no code implementations • 29 Jul 2017 • Guillaume Revillon, Ali Mohammad-Djafari, Cyrille Enderli
The classification method centers on the introduction of a new prior distribution for the model hyper-parameters, which lets us handle the sensitivity of mixture models to initialization and allows a less restrictive modeling of the data.