no code implementations • 12 Mar 2024 • Janina Schreiber, Pau Batlle, Damar Wicaksono, Michael Hecht
We introduce a surrogate-based black-box optimization method, termed Polynomial-model-based optimization (PMBO).
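As a rough illustration of the loop a surrogate-based method like this implies, here is a minimal Python sketch: fit a polynomial surrogate to the evaluated points, minimise the surrogate, and query the black box at the proposed point. The objective, polynomial degree, and grid search are illustrative assumptions; PMBO's actual basis, degree adaptation, and acquisition rule are not reproduced here.

```python
# Hypothetical sketch of a polynomial-surrogate black-box optimization loop
# in the spirit of PMBO; basis, degree, and acquisition rule are assumptions.
import numpy as np

def black_box(x):
    # Stand-in expensive objective (assumption; any costly function works).
    return np.sin(3.0 * x) + 0.5 * x**2

rng = np.random.default_rng(0)
xs = rng.uniform(-2.0, 2.0, size=6)                   # initial design
ys = black_box(xs)

for _ in range(10):
    coeffs = np.polyfit(xs, ys, deg=4)                # fit polynomial surrogate
    grid = np.linspace(-2.0, 2.0, 401)
    cand = grid[np.argmin(np.polyval(coeffs, grid))]  # minimize the surrogate
    xs = np.append(xs, cand)                          # query true objective
    ys = np.append(ys, black_box(cand))

print("best x, f(x):", xs[np.argmin(ys)], ys.min())
```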
1 code implementation • 15 Sep 2023 • Chethan Krishnamurthy Ramanaik, Juan-Esteban Suarez Cardona, Anna Willmann, Pia Hanfeld, Nico Hoffmann, Michael Hecht
Revisiting this classic result enables us to prove that regularised autoencoders ensure a one-to-one re-embedding of the initial data manifold into its latent representation.
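For readers unfamiliar with the setup, the following is a minimal sketch of a regularised autoencoder; the quadratic weight penalty is an assumed stand-in, not the paper's specific regulariser.

```python
# Minimal regularised autoencoder sketch (assumption: a simple weight
# penalty; the paper's exact regulariser is not reproduced here).
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(10, 2), nn.Tanh())   # data -> latent
dec = nn.Sequential(nn.Linear(2, 10))              # latent -> data
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.randn(256, 10)                           # toy data batch
for _ in range(200):
    recon = dec(enc(x))
    reg = sum((p**2).sum() for p in enc.parameters())  # encoder penalty
    loss = ((recon - x)**2).mean() + 1e-4 * reg
    opt.zero_grad(); loss.backward(); opt.step()
```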
no code implementations • 1 Sep 2023 • Janina Schreiber, Damar Wicaksono, Michael Hecht
For a wide range of applications, the structure of systems such as neural networks or complex simulations is unknown, and approximation is costly or even impossible.
no code implementations • 12 Jan 2023 • Juan-Esteban Suarez Cardona, Phil-Alexander Hofmann, Michael Hecht
In contrast to PINNs, PSMs result in a convex optimisation problem for a vast class of PDEs, including all linear ones; in that case the PSM approximation is efficiently computable due to the exponential convergence rate of the underlying variational gradient descent.
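To see why a linear PDE makes the problem convex for a polynomial surrogate: the PDE operator acts linearly on the polynomial coefficients, so a least-squares residual is quadratic in them. A minimal sketch follows; the Poisson problem, monomial basis, and collocation points are illustrative assumptions, and the exact least-squares solve stands in for the variational gradient descent.

```python
# Sketch: for a linear PDE, a polynomial surrogate yields a convex problem.
# Example: -u''(x) = f(x) on [0, 1], u(0) = u(1) = 0 (illustrative setup).
import numpy as np

deg = 8
xs = np.linspace(0.0, 1.0, 50)                 # collocation points
f = np.pi**2 * np.sin(np.pi * xs)              # exact solution: u = sin(pi x)

# Rows: -(x^k)'' = -k(k-1) x^(k-2) evaluated at the collocation points.
A = np.stack([-k * (k - 1) * xs**(k - 2) if k >= 2 else np.zeros_like(xs)
              for k in range(deg + 1)], axis=1)
# Boundary conditions u(0) = 0 and u(1) = 0 as extra linear equations.
B0 = np.array([[0.0**k for k in range(deg + 1)]])
B1 = np.ones((1, deg + 1))
A = np.vstack([A, B0, B1])
b = np.concatenate([f, [0.0, 0.0]])

coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)  # convex, solved exactly
u = sum(c * xs**k for k, c in enumerate(coeffs))
print("max error vs sin(pi x):", np.abs(u - np.sin(np.pi * xs)).max())
```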
no code implementations • 23 Nov 2022 • Juan Esteban Suarez Cardona, Michael Hecht
We present a novel class of approximations for variational losses that is applicable to the training of physics-informed neural networks (PINNs).
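As a hedged illustration of what a variational loss for a PINN looks like, the sketch below minimises the Dirichlet energy of a 1D Poisson problem with Monte Carlo quadrature; the energy functional, boundary-enforcing ansatz, and sampling are assumptions, not the paper's specific approximation class.

```python
# Sketch of a variational (energy-functional) PINN loss, approximated by
# Monte Carlo quadrature (illustrative; not the paper's approximation).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def energy(x):
    # Dirichlet energy for -u'' = f:  E[u] = integral of (1/2) u'^2 - f u,
    # with u(0) = u(1) = 0 enforced by the ansatz u = x(1-x) * net(x).
    x = x.requires_grad_(True)
    u = x * (1 - x) * net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    f = torch.pi**2 * torch.sin(torch.pi * x)
    return (0.5 * du**2 - f * u).mean()       # MC estimate of the integral

for _ in range(500):
    x = torch.rand(256, 1)                    # quadrature samples on (0, 1)
    loss = energy(x)
    opt.zero_grad(); loss.backward(); opt.step()
```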
1 code implementation • 10 Jun 2021 • Nishant Kumar, Pia Hanfeld, Michael Hecht, Michael Bussmann, Stefan Gumhold, Nico Hoffmann
Normalizing flows are prominent deep generative models that provide tractable probability distributions and efficient density estimation.
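For context, the tractability comes from the change-of-variables formula log p_x(x) = log p_z(f(x)) + log |det df/dx|; a single affine layer is sketched below as an illustrative toy, since real flows compose many learned invertible transforms.

```python
# Change-of-variables density that normalizing flows make tractable,
# shown for one invertible affine map z = a*x + b (toy illustration).
import torch

a, b = torch.tensor(2.0), torch.tensor(0.5)    # invertible affine transform
base = torch.distributions.Normal(0.0, 1.0)    # base distribution p_z

def log_prob(x):
    z = a * x + b
    # log p_x(x) = log p_z(f(x)) + log |det Jacobian of f|
    return base.log_prob(z) + torch.log(torch.abs(a))

x = torch.linspace(-3, 3, 5)
print(log_prob(x))                             # exact log-densities, no sampling
```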