no code implementations • 26 Jul 2023 • Antoine Wehenkel, Jens Behrmann, Andrew C. Miller, Guillermo Sapiro, Ozan Sener, Marco Cuturi, Jörn-Henrik Jacobsen
Over the past decades, hemodynamics simulators have steadily evolved and have become tools of choice for studying cardiovascular systems in silico.
1 code implementation • 8 Feb 2022 • Antoine Wehenkel, Jens Behrmann, Hsiang Hsu, Guillermo Sapiro, Gilles Louppe, Jörn-Henrik Jacobsen
Hybrid modelling reduces the misspecification of expert models by combining them with machine learning (ML) components learned from data.
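The hybrid-modelling idea can be sketched as follows. This is a minimal illustration, not the paper's method: the expert model and the learned residual term below are hypothetical toy functions chosen so the least-squares fit has a closed form.

```python
import numpy as np

# Minimal sketch of hybrid modelling (all names illustrative): an expert
# model captures known structure, and an ML component learns a residual
# correction from data, reducing the expert model's misspecification.
def expert_model(x):
    return 2.0 * x           # hypothetical physics-based prediction

def ml_correction(x, w):
    return w * x**2          # hypothetical learned residual term

def hybrid_predict(x, w):
    return expert_model(x) + ml_correction(x, w)

# Fit the residual weight on synthetic data generated by y = 2x + 0.3 x^2,
# i.e. the expert model alone is misspecified by the quadratic term.
x = np.linspace(-1.0, 1.0, 50)
y = 2.0 * x + 0.3 * x**2
resid = y - expert_model(x)
w = np.sum(resid * x**2) / np.sum(x**4)   # closed-form least squares
```

Here the hybrid model recovers the missing quadratic term exactly (`w ≈ 0.3`), while the expert model alone cannot.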
no code implementations • ICML Workshop INNF 2021 • Niklas Koenen, Marvin N. Wright, Peter Maaß, Jens Behrmann
Normalizing flows leverage the Change of Variables Formula (CVF) to define flexible density models.
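For reference, the change of variables formula underlying normalizing flows: for an invertible, differentiable map $f$ with base density $p_Z$ on $z = f(x)$,

```latex
p_X(x) = p_Z\bigl(f(x)\bigr)\,\left|\det \frac{\partial f(x)}{\partial x}\right|
```

so a flexible invertible $f$ with a tractable Jacobian log-determinant yields a flexible, exactly normalized density model.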
1 code implementation • 16 Jun 2020 • Jens Behrmann, Paul Vicol, Kuan-Chieh Wang, Roger Grosse, Jörn-Henrik Jacobsen
For problems where global invertibility is necessary, such as applying normalizing flows on OOD data, we show the importance of designing stable INN building blocks.
1 code implementation • ICML 2020 • Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, Jörn-Henrik Jacobsen
Adversarial examples are malicious inputs crafted to induce misclassification.
no code implementations • 10 Dec 2019 • Christian Etmann, Maximilian Schmidt, Jens Behrmann, Tobias Boskamp, Lena Hauberg-Lotte, Annette Peter, Rita Casadonte, Jörg Kriegsmann, Peter Maass
Neural networks have recently been established as a viable classification method for imaging mass spectrometry data for tumor typing.
no code implementations • 25 Sep 2019 • Jens Behrmann, Paul Vicol, Kuan-Chieh Wang, Roger B. Grosse, Jörn-Henrik Jacobsen
Guarantees in deep learning are hard to achieve due to the interplay of flexible modeling schemes and complex tasks.
4 code implementations • NeurIPS 2019 • Ricky T. Q. Chen, Jens Behrmann, David Duvenaud, Jörn-Henrik Jacobsen
Flow-based generative models parameterize probability distributions through an invertible transformation and can be trained by maximum likelihood.
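As a minimal sketch of training by maximum likelihood (not the Residual Flows estimator itself): with a 1-D affine flow $f(x) = (x - \mu)/\sigma$ and a standard normal base density, the change of variables formula gives a closed-form log-likelihood. All names below are illustrative.

```python
import numpy as np

# log p_X(x) = log p_Z(f(x)) + log |df/dx| = log N(f(x); 0, 1) - log sigma
def flow_log_likelihood(x, mu, sigma):
    z = (x - mu) / sigma                           # forward pass of the flow
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))   # standard normal log-density
    log_det = -np.log(sigma)                       # log |df/dx| of the affine map
    return log_base + log_det
```

Maximizing this quantity over $(\mu, \sigma)$ recovers ordinary Gaussian maximum likelihood; deep flows replace the affine map with a composition of learned invertible layers, each contributing its own log-determinant term.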
Ranked #2 on Image Generation on MNIST
no code implementations • ICLR 2019 • Jens Behrmann, Sören Dittmer, Pascal Fernsel, Peter Maass
We flip the usual approach to study invariance and robustness of neural networks by considering the non-uniqueness and instability of the inverse mapping.
5 code implementations • 2 Nov 2018 • Jens Behrmann, Will Grathwohl, Ricky T. Q. Chen, David Duvenaud, Jörn-Henrik Jacobsen
We show that standard ResNet architectures can be made invertible, allowing the same model to be used for classification, density estimation, and generation.
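The invertibility mechanism can be sketched numerically: a residual block $y = x + g(x)$ is invertible whenever $g$ is contractive (Lipschitz constant below 1), and its inverse is computable by fixed-point iteration. The toy contraction below is illustrative, not the paper's spectral-normalized network.

```python
import numpy as np

def g(x):
    # Hypothetical toy residual branch; 0.5 * tanh has Lipschitz constant 0.5 < 1.
    return 0.5 * np.tanh(x)

def residual_forward(x):
    return x + g(x)

def residual_inverse(y, n_iter=50):
    # Fixed-point iteration x_{k+1} = y - g(x_k); since Lip(g) < 1 this is a
    # contraction, so it converges geometrically to the unique preimage of y.
    x = y
    for _ in range(n_iter):
        x = y - g(x)
    return x

x = np.array([-1.0, 0.3, 2.5])
x_reconstructed = residual_inverse(residual_forward(x))
```

The round trip recovers `x` to numerical precision, which is what lets the same residual architecture serve as both a classifier and a normalizing flow.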
Ranked #5 on Image Generation on MNIST
no code implementations • ICLR 2019 • Jörn-Henrik Jacobsen, Jens Behrmann, Richard Zemel, Matthias Bethge
Despite their impressive performance, deep neural networks exhibit striking failures on out-of-distribution inputs.
no code implementations • 25 Jun 2018 • Jens Behrmann, Sören Dittmer, Pascal Fernsel, Peter Maaß
Studying the invertibility of deep neural networks (DNNs) provides a principled approach to better understand the behavior of these powerful models.
no code implementations • 2 May 2017 • Jens Behrmann, Christian Etmann, Tobias Boskamp, Rita Casadonte, Jörg Kriegsmann, Peter Maass
Deep learning offers an approach to learn feature extraction and classification combined in a single model.