Search Results for author: René Raab

Found 6 papers, 2 papers with code

Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks

no code implementations • 21 May 2021 • Leo Schwinn, René Raab, An Nguyen, Dario Zanca, Bjoern Eskofier

Progress in making neural networks more robust against adversarial attacks is mostly marginal, despite the great efforts of the research community.

CLIP: Cheap Lipschitz Training of Neural Networks

1 code implementation • 23 Mar 2021 • Leon Bungert, René Raab, Tim Roith, Leo Schwinn, Daniel Tenbrinck

Despite the great success of deep neural networks (DNNs) in recent years, most neural networks still lack mathematical guarantees in terms of stability.
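
The title suggests training with a Lipschitz-constant penalty as the route to such stability guarantees. Below is a minimal PyTorch sketch of Lipschitz-regularized training; the finite-difference estimate, the weight `lam`, and the perturbation scale `eps` are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch (not the paper's exact algorithm): cross-entropy training
# with a penalty on a crude local Lipschitz estimate of the network.
# `lam` and `eps` are illustrative assumptions.
import torch
import torch.nn.functional as F

def lipschitz_penalty(model, x, eps=1e-2):
    """Finite-difference lower bound on the local Lipschitz constant at x."""
    delta = eps * torch.randn_like(x)                  # random direction
    num = (model(x + delta) - model(x)).flatten(1).norm(dim=1)
    den = delta.flatten(1).norm(dim=1)
    return (num / den).mean()

def training_step(model, optimizer, x, y, lam=0.1):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + lam * lipschitz_penalty(model, x)
    loss.backward()
    optimizer.step()
    return loss.item()
```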

Identifying Untrustworthy Predictions in Neural Networks by Geometric Gradient Analysis

1 code implementation • 24 Feb 2021 • Leo Schwinn, An Nguyen, René Raab, Leon Bungert, Daniel Tenbrinck, Dario Zanca, Martin Burger, Bjoern Eskofier

The susceptibility of deep neural networks to untrustworthy predictions, including out-of-distribution (OOD) data and adversarial examples, still prevents their widespread use in safety-critical applications.
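
Judging by the title, the geometric gradient analysis compares input gradients taken with respect to the different class outputs. The sketch below computes a pairwise cosine-similarity matrix of those gradients for a single input; treating anomalies in this matrix as a trust signal is an assumption drawn from the title, not from the abstract text.

```python
# Minimal sketch: pairwise cosine similarities between the input gradients
# of all class logits for one sample. How these similarities are turned
# into a trust score is left open here.
import torch
import torch.nn.functional as F

def class_gradient_similarities(model, x):
    """Return a (num_classes, num_classes) cosine-similarity matrix."""
    x = x.clone().requires_grad_(True)
    logits = model(x.unsqueeze(0)).squeeze(0)          # single-sample forward
    grads = []
    for k in range(logits.numel()):
        g, = torch.autograd.grad(logits[k], x, retain_graph=True)
        grads.append(g.flatten())
    g = F.normalize(torch.stack(grads), dim=1)         # unit-norm gradients
    return g @ g.t()
```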

Dynamically Sampled Nonlocal Gradients for Stronger Adversarial Attacks

no code implementations • 5 Nov 2020 • Leo Schwinn, An Nguyen, René Raab, Dario Zanca, Bjoern Eskofier, Daniel Tenbrinck, Martin Burger

We empirically show that by incorporating this nonlocal gradient information, we are able to give a more accurate estimation of the global descent direction on noisy and non-convex loss surfaces.
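
As a rough illustration of that idea, the sketch below estimates a nonlocal gradient by averaging loss gradients sampled around the current iterate and uses it in a sign-step attack update. The sampling radius `sigma`, the sample count, and the FGSM-style step are assumptions, not necessarily the paper's exact scheme.

```python
# Minimal sketch: attack direction averaged over gradients sampled in a
# neighborhood of the current iterate ("nonlocal" gradient information).
import torch
import torch.nn.functional as F

def nonlocal_gradient(model, x, y, sigma=0.05, n_samples=8):
    """Average input gradient of the loss over a sampled neighborhood of x."""
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        x_s = (x + sigma * torch.randn_like(x)).requires_grad_(True)
        loss = F.cross_entropy(model(x_s), y)
        grad += torch.autograd.grad(loss, x_s)[0]
    return grad / n_samples

def attack_step(model, x, y, alpha=0.01):
    # FGSM-style step along the smoothed (nonlocal) gradient direction
    return x + alpha * nonlocal_gradient(model, x, y).sign()
```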

Towards Rapid and Robust Adversarial Training with One-Step Attacks

no code implementations • 24 Feb 2020 • Leo Schwinn, René Raab, Björn Eskofier

Further, we add a learnable regularization step prior to the neural network, which we call Pixelwise Noise Injection Layer (PNIL).
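
A minimal sketch of such a noise-injection layer placed in front of a network is given below; the per-pixel Gaussian parameterization is an assumption about PNIL, not taken from the abstract.

```python
# Minimal sketch of a learnable noise-injection layer ("PNIL"-like);
# zero-mean Gaussian noise with a learnable per-pixel scale is an
# illustrative assumption about the layer's design.
import torch
import torch.nn as nn

class NoiseInjectionLayer(nn.Module):
    def __init__(self, input_shape):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(input_shape))  # per-pixel

    def forward(self, x):
        if self.training:                      # inject noise only in training
            x = x + self.log_scale.exp() * torch.randn_like(x)
        return x

# Usage: model = nn.Sequential(NoiseInjectionLayer((3, 32, 32)), backbone)
```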
