no code implementations • NeurIPS 2021 • Lorenzo Noci, Gregor Bachmann, Kevin Roth, Sebastian Nowozin, Thomas Hofmann
Recent works on Bayesian neural networks (BNNs) have highlighted the need to better understand the implications of using Gaussian priors in combination with the compositional structure of the network architecture.
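The interaction between Gaussian priors and network depth can be illustrated by sampling from the prior predictive of a toy network. The following is a minimal sketch (not taken from the paper; the widths, depth, and He-style weight scaling are illustrative assumptions):

```python
import numpy as np

def sample_prior_predictive(x, depth=3, width=64, rng=None):
    """Draw one function sample from a ReLU network with Gaussian priors.

    Weights ~ N(0, 2/fan_in) (He scaling), biases zero; x has shape (n, d).
    """
    rng = np.random.default_rng(rng)
    h = x
    for _ in range(depth):
        W = rng.normal(0.0, np.sqrt(2.0 / h.shape[1]), size=(h.shape[1], width))
        h = np.maximum(h @ W, 0.0)          # ReLU hidden layer
    w_out = rng.normal(0.0, np.sqrt(1.0 / width), size=(width, 1))
    return h @ w_out                        # shape (n, 1)

x = np.linspace(-2, 2, 50).reshape(-1, 1)
draws = np.stack([sample_prior_predictive(x, rng=s) for s in range(200)])
print(draws.std())  # empirical spread of the prior predictive at the inputs
```

Repeating this with different depths shows how the compositional structure reshapes the induced prior over functions.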
no code implementations • NeurIPS 2021 • Lorenzo Noci, Kevin Roth, Gregor Bachmann, Sebastian Nowozin, Thomas Hofmann
We test the dataset curation hypothesis of Aitchison (2020): we show empirically that the cold posterior effect (CPE) does not arise in a real curated data set but can be produced in a controlled experiment with varying curation strength.
no code implementations • ICML Workshop AML 2021 • Kevin Roth
This approach is however inherently limited, as it says little about the robustness of the model against more powerful attacks not included in the evaluation.
no code implementations • ICML 2020 • Jakub Swiatkowski, Kevin Roth, Bastiaan S. Veeling, Linh Tran, Joshua V. Dillon, Jasper Snoek, Stephan Mandt, Tim Salimans, Rodolphe Jenatton, Sebastian Nowozin
Variational Bayesian Inference is a popular methodology for approximating posterior distributions over Bayesian neural network weights.
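In the common mean-field setting, the variational posterior is a diagonal Gaussian over the weights, trained with the reparameterization trick and a closed-form KL term against a standard normal prior. A minimal sketch of those two ingredients (illustrative; not this paper's specific parameterization):

```python
import numpy as np

def kl_mean_field_to_standard_normal(mu, log_sigma):
    """KL( N(mu, sigma^2) || N(0, 1) ), summed over independent weights."""
    sigma2 = np.exp(2.0 * log_sigma)
    return 0.5 * np.sum(sigma2 + mu**2 - 1.0 - 2.0 * log_sigma)

def sample_weights(mu, log_sigma, rng):
    """Reparameterization trick: w = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

mu, log_sigma = np.zeros(10), np.zeros(10)   # posterior equals the prior
print(kl_mean_field_to_standard_normal(mu, log_sigma))  # -> 0.0
```

The KL term plus a Monte Carlo estimate of the expected log-likelihood (using `sample_weights`) gives the ELBO that such methods optimize.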
1 code implementation • ICML 2020 • Florian Wenzel, Kevin Roth, Bastiaan S. Veeling, Jakub Świątkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, Sebastian Nowozin
In this work we cast doubt on the current understanding of Bayes posteriors in popular deep neural networks: we demonstrate through careful MCMC sampling that the posterior predictive induced by the Bayes posterior yields systematically worse predictions than simpler methods, including point estimates obtained from SGD.
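The "cold posterior" phenomenon studied here concerns tempered posteriors of the form p(θ|D)^(1/T) with T < 1. A minimal sketch on a conjugate 1D Gaussian model, using plain Metropolis sampling rather than the paper's stochastic-gradient MCMC (all model details below are illustrative assumptions):

```python
import numpy as np

def log_posterior(theta, data, prior_var=1.0, noise_var=1.0):
    """Unnormalized log posterior for a Gaussian mean with N(0, prior_var) prior."""
    log_prior = -0.5 * theta**2 / prior_var
    log_lik = -0.5 * np.sum((data - theta)**2) / noise_var
    return log_prior + log_lik

def metropolis_tempered(data, T=1.0, n_steps=20000, step=0.5, seed=0):
    """Metropolis chain targeting exp(log_posterior(theta) / T)."""
    rng = np.random.default_rng(seed)
    theta, samples = 0.0, []
    lp = log_posterior(theta, data)
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_posterior(prop, data)
        if np.log(rng.random()) < (lp_prop - lp) / T:
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples[n_steps // 2:])  # discard burn-in

data = np.array([0.8, 1.2, 1.0, 0.9])
warm = metropolis_tempered(data, T=1.0)
cold = metropolis_tempered(data, T=0.1)
print(warm.std(), cold.std())  # the cold (T < 1) posterior is more concentrated
```

Cooling the posterior concentrates it around the MAP estimate, which is the regime the paper finds empirically necessary for good predictive performance.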
1 code implementation • 14 Jan 2020 • Linh Tran, Bastiaan S. Veeling, Kevin Roth, Jakub Swiatkowski, Joshua V. Dillon, Jasper Snoek, Stephan Mandt, Tim Salimans, Sebastian Nowozin, Rodolphe Jenatton
As a result, the diversity of predictions across ensemble members is lost.
no code implementations • 25 Sep 2019 • Jakub Świątkowski, Kevin Roth, Bastiaan S. Veeling, Linh Tran, Joshua V. Dillon, Jasper Snoek, Stephan Mandt, Tim Salimans, Rodolphe Jenatton, Sebastian Nowozin
Variational Bayesian Inference is a popular methodology for approximating posterior distributions in Bayesian neural networks.
no code implementations • 25 Sep 2019 • Kevin Roth, Yannic Kilcher, Thomas Hofmann
We establish a theoretical link between adversarial training and operator norm regularization for deep neural networks.
no code implementations • NeurIPS 2020 • Kevin Roth, Yannic Kilcher, Thomas Hofmann
We establish a theoretical link between adversarial training and operator norm regularization for deep neural networks.
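The operator (spectral) norm of a layer's weight matrix is typically estimated by power iteration. A minimal sketch of that standard estimator (illustrative; not the paper's exact regularization procedure):

```python
import numpy as np

def spectral_norm(W, n_iters=100, seed=0):
    """Largest singular value of W via power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(W.shape[1])
    for _ in range(n_iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

W = np.random.default_rng(1).standard_normal((50, 30))
print(spectral_norm(W), np.linalg.svd(W, compute_uv=False)[0])  # should agree
```

Penalizing this quantity bounds how much a layer can amplify an input perturbation, which is the mechanism linking it to adversarial robustness.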
1 code implementation • 13 Feb 2019 • Kevin Roth, Yannic Kilcher, Thomas Hofmann
We investigate conditions under which test statistics exist that can reliably detect examples that have been adversarially manipulated in a white-box attack.
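A generic noise-perturbation test statistic measures how the logit gap between the predicted class and the runner-up shifts under random input noise; adversarial inputs, sitting close to a decision boundary, tend to show larger shifts. A sketch of that generic recipe (the toy model, noise scale, and statistic below are illustrative assumptions, not the paper's calibrated procedure):

```python
import numpy as np

def logits(x, W1, W2):
    """Toy two-layer ReLU classifier."""
    return np.maximum(x @ W1, 0.0) @ W2

def noise_statistic(x, W1, W2, sigma=0.1, n_noise=256, seed=0):
    """Mean shift of the top-vs-runner-up logit gap under Gaussian noise."""
    rng = np.random.default_rng(seed)
    z = logits(x, W1, W2)
    order = np.argsort(z)
    top, second = order[-1], order[-2]
    gap_clean = z[top] - z[second]
    noisy = logits(x + sigma * rng.standard_normal((n_noise, x.size)), W1, W2)
    gap_noisy = noisy[:, top] - noisy[:, second]
    return float(gap_noisy.mean() - gap_clean)  # large magnitude -> suspicious

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((8, 16)), rng.standard_normal((16, 3))
x = rng.standard_normal(8)
print(noise_statistic(x, W1, W2))
```

A detector then thresholds such a statistic, with the threshold calibrated on clean data.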
no code implementations • 22 May 2018 • Kevin Roth, Aurelien Lucchi, Sebastian Nowozin, Thomas Hofmann
We propose a novel data-dependent structured gradient regularizer to increase the robustness of neural networks against adversarial perturbations.
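The generic idea behind gradient regularization is to penalize the norm of the loss gradient with respect to the *input*. A minimal sketch using a plain isotropic penalty on logistic regression, where the input gradient is available in closed form (illustrative; the paper's regularizer is structured and data-dependent, which this sketch is not):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def loss_with_gradient_penalty(w, X, y, lam=0.1):
    """Logistic loss plus lam * ||d loss / d x||^2, averaged over the batch.

    For L(x) = log(1 + exp(-y w.x)), the input gradient is -y * sigmoid(-y w.x) * w,
    so per example ||grad_x L||^2 = sigmoid(-y w.x)^2 * ||w||^2.
    """
    margins = y * (X @ w)
    data_loss = np.mean(np.log1p(np.exp(-margins)))
    grad_norm2 = (sigmoid(-margins) ** 2) * np.sum(w**2)
    return data_loss + lam * np.mean(grad_norm2)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = np.sign(X @ np.ones(5))
print(loss_with_gradient_penalty(np.ones(5), X, y))
```

Small input gradients mean small perturbations of x cannot change the loss much, which is the link to adversarial robustness.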
1 code implementation • NeurIPS 2017 • Kevin Roth, Aurelien Lucchi, Sebastian Nowozin, Thomas Hofmann
Deep generative models based on Generative Adversarial Networks (GANs) have demonstrated impressive sample quality, but they require a careful choice of architecture, parameter initialization, and hyper-parameters.
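The stabilization studied in this line of work penalizes the discriminator's input gradients. A minimal sketch for a logistic discriminator D(x) = sigmoid(w.x + b), whose input gradient has a closed form (illustrative; this is not the paper's exact weighting of the penalty):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def discriminator_gradient_penalty(X, w, b):
    """Mean squared norm of grad_x D(x) for D(x) = sigmoid(w.x + b).

    grad_x D = D(x) * (1 - D(x)) * w, so ||grad||^2 = (D(1-D))^2 * ||w||^2.
    """
    d = sigmoid(X @ w + b)
    return float(np.mean((d * (1.0 - d)) ** 2) * np.sum(w**2))

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
w, b = rng.standard_normal(4), 0.0
print(discriminator_gradient_penalty(X, w, b))
```

In training, such a penalty is added to the discriminator loss with a regularization weight, discouraging sharp discriminator decision surfaces that destabilize the adversarial game.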