Search Results for author: Georg Siedel

Found 2 papers, 1 paper with code

Investigating the Corruption Robustness of Image Classifiers with Random Lp-norm Corruptions

1 code implementation • 9 May 2023 • Georg Siedel, Weijia Shao, Silvia Vock, Andrey Morozov

In the field of adversarial robustness of image classifiers, robustness is commonly defined as the stability of a model's prediction under all input changes within a fixed Lp-norm distance.

Tasks: Adversarial Robustness • Data Augmentation • +1
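The robustness notion quoted above (prediction stability under input changes within an Lp ball) lends itself to a randomized check. Below is a minimal sketch, not the authors' implementation: it rescales Gaussian noise to a fixed Lp norm and tests whether a classifier's prediction survives repeated random corruptions. The names `predict` and `image`, and the particular noise-sampling shortcut, are assumptions for illustration.

```python
import numpy as np

def random_lp_corruption(shape, p=2.0, eps=0.1, rng=None):
    # One simple way to draw a perturbation of a fixed Lp norm:
    # sample Gaussian noise and rescale it so its Lp norm equals eps.
    rng = rng or np.random.default_rng()
    delta = rng.standard_normal(shape)
    return delta * (eps / np.linalg.norm(delta.ravel(), ord=p))

def is_stable(predict, image, p=2.0, eps=0.1, n_samples=100):
    # The prediction counts as "stable" here if it is unchanged on every
    # sampled corruption; the paper's exact sampling scheme may differ.
    base = predict(image)
    return all(
        predict(np.clip(image + random_lp_corruption(image.shape, p, eps), 0.0, 1.0)) == base
        for _ in range(n_samples)
    )
```

Random sampling of this kind only estimates robustness (it cannot certify stability over the whole Lp ball), which is precisely the corruption-robustness setting, as opposed to worst-case adversarial robustness.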

Utilizing Class Separation Distance for the Evaluation of Corruption Robustness of Machine Learning Classifiers

no code implementations • 27 Jun 2022 • Georg Siedel, Silvia Vock, Andrey Morozov, Stefan Voß

Furthermore, we observe unexpected optima in classifiers' robust accuracy when training and testing classifiers with different levels of noise.

Tasks: Data Augmentation
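The title's "class separation distance" is not defined in this snippet. One plausible reading is the distance from each sample to its nearest neighbor of a different class, which bounds how large a corruption can be before class regions collide. The sketch below implements that reading as an assumption, not necessarily the paper's exact metric.

```python
import numpy as np

def class_separation_distance(X, y, p=2.0):
    # Flatten samples and compute all pairwise Lp distances
    # (O(n^2) memory, fine for a small evaluation set).
    X = X.reshape(len(X), -1)
    diffs = np.abs(X[:, None, :] - X[None, :, :])
    dists = (diffs ** p).sum(axis=-1) ** (1.0 / p)
    # Mask same-class pairs, then take each sample's minimum
    # distance to any sample of a different class.
    other_class = y[:, None] != y[None, :]
    return np.where(other_class, dists, np.inf).min(axis=1)
```

Under this reading, small values flag samples whose labels could flip under a correspondingly small corruption, connecting the metric to the corruption-robustness evaluation the title describes.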
