no code implementations • 19 May 2022 • Leo Schwinn, Leon Bungert, An Nguyen, René Raab, Falk Pulsmeyer, Doina Precup, Björn Eskofier, Dario Zanca
The reliability of neural networks is essential for their use in safety-critical applications.
no code implementations • 21 May 2021 • Leo Schwinn, René Raab, An Nguyen, Dario Zanca, Bjoern Eskofier
Despite the great efforts of the research community, progress in making neural networks more robust against adversarial attacks has been mostly marginal.
1 code implementation • 23 Mar 2021 • Leon Bungert, René Raab, Tim Roith, Leo Schwinn, Daniel Tenbrinck
Despite the great success of deep neural networks (DNNs) in recent years, most neural networks still lack mathematical guarantees in terms of stability.
1 code implementation • 24 Feb 2021 • Leo Schwinn, An Nguyen, René Raab, Leon Bungert, Daniel Tenbrinck, Dario Zanca, Martin Burger, Bjoern Eskofier
The susceptibility of deep neural networks to untrustworthy predictions, including out-of-distribution (OOD) data and adversarial examples, still prevents their widespread use in safety-critical applications.
no code implementations • 5 Nov 2020 • Leo Schwinn, An Nguyen, René Raab, Dario Zanca, Bjoern Eskofier, Daniel Tenbrinck, Martin Burger
We empirically show that by incorporating this nonlocal gradient information, we obtain a more accurate estimate of the global descent direction on noisy and non-convex loss surfaces.
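A minimal sketch of this idea, assuming the nonlocal gradient is approximated by averaging local gradients at Gaussian-perturbed copies of the current iterate; the function name `nonlocal_grad` and the sampling scheme (`n_samples`, `sigma`) are illustrative assumptions, not the paper's exact formulation:

```python
import torch

def nonlocal_grad(loss_fn, x, n_samples=8, sigma=0.1):
    """Hypothetical sketch: estimate a smoothed (nonlocal) gradient by
    averaging local gradients at Gaussian-perturbed copies of x."""
    grads = []
    for _ in range(n_samples):
        x_pert = (x + sigma * torch.randn_like(x)).detach().requires_grad_(True)
        loss = loss_fn(x_pert)            # scalar loss at the perturbed point
        grad, = torch.autograd.grad(loss, x_pert)
        grads.append(grad)
    # Averaging over the neighborhood damps local noise in the loss
    # surface, so the result tracks the global descent direction better
    # than a single local gradient would.
    return torch.stack(grads).mean(dim=0)
```

Used in place of a plain gradient, e.g. `x = x - lr * nonlocal_grad(loss_fn, x)`, this trades `n_samples` extra backward passes for a smoother descent direction.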
no code implementations • 24 Feb 2020 • Leo Schwinn, René Raab, Björn Eskofier
Further, we prepend a learnable regularization step to the neural network, which we call the Pixelwise Noise Injection Layer (PNIL).
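The paper's exact parameterization is not reproduced here; the following is a hedged sketch of what a PNIL-style layer could look like, assuming one learnable noise scale per input pixel that modulates Gaussian noise at training time:

```python
import torch
import torch.nn as nn

class PixelwiseNoiseInjection(nn.Module):
    """Hypothetical PNIL-style layer: learnable per-pixel scales
    modulate Gaussian noise added to the input during training."""
    def __init__(self, input_shape):
        super().__init__()
        # One learnable noise scale per input pixel (assumption:
        # initialized to zero, i.e. no noise before training).
        self.scale = nn.Parameter(torch.zeros(input_shape))

    def forward(self, x):
        if self.training:
            # Inject noise only at train time, acting as a learned
            # regularizer in front of the downstream network.
            x = x + self.scale * torch.randn_like(x)
        return x

# Usage sketch: place the layer in front of an existing backbone.
# model = nn.Sequential(PixelwiseNoiseInjection((3, 32, 32)), backbone)
```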