1 code implementation • 9 May 2023 • Georg Siedel, Weijia Shao, Silvia Vock, Andrey Morozov
In the field of adversarial robustness of image classifiers, robustness is commonly defined as the stability of a model's prediction under all input changes within a given p-norm distance.
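This p-norm notion of robustness can be sketched as follows: a model is robust at an input if its prediction is unchanged for any perturbation inside the epsilon-ball. The sketch below is illustrative only; the `model`, the toy inputs, and the helper names are hypothetical, not the paper's implementation.

```python
import numpy as np

def within_lp_ball(x, x_perturbed, epsilon, p=2):
    """True if the perturbation lies inside the p-norm epsilon-ball around x."""
    return np.linalg.norm((x_perturbed - x).ravel(), ord=p) <= epsilon

def prediction_stable(model, x, x_perturbed, epsilon, p=2):
    """Check the robustness condition for one candidate perturbation:
    if it is inside the ball, the prediction must not change."""
    if not within_lp_ball(x, x_perturbed, epsilon, p):
        return True  # outside the ball: not a valid counterexample
    return model(x) == model(x_perturbed)

# hypothetical toy classifier: sign of the feature sum
model = lambda x: int(x.sum() > 0)
x = np.array([0.2, -0.1])
x_adv = x + np.array([0.05, 0.0])
print(prediction_stable(model, x, x_adv, epsilon=0.1, p=2))  # → True
```

Certifying robustness proper would require this condition to hold for *every* point in the ball, not just sampled perturbations; the sketch only checks a single candidate.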
no code implementations • 27 Jun 2022 • Georg Siedel, Silvia Vock, Andrey Morozov, Stefan Voß
Furthermore, we observe unexpected optima in classifiers' robust accuracy when training and testing classifiers with different levels of noise.
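Measuring robust accuracy across noise levels can be sketched as below. This is a minimal illustration, not the paper's evaluation protocol; the toy `model`, data, and Gaussian noise choice are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_accuracy(model, X, y, sigma, n_repeats=10):
    """Average accuracy of `model` on inputs corrupted by
    additive Gaussian noise with standard deviation sigma."""
    correct, total = 0, 0
    for _ in range(n_repeats):
        X_noisy = X + rng.normal(0.0, sigma, size=X.shape)
        preds = np.array([model(x) for x in X_noisy])
        correct += int((preds == y).sum())
        total += len(y)
    return correct / total

# hypothetical toy classifier and data
model = lambda x: int(x.sum() > 0)
X = np.array([[1.0, 1.0], [-1.0, -1.0], [2.0, 0.5], [-0.5, -2.0]])
y = np.array([1, 0, 1, 0])

# sweep the test-time noise level to trace an accuracy-vs-noise curve
for sigma in [0.0, 0.5, 1.0]:
    print(sigma, noisy_accuracy(model, X, y, sigma))
```

Sweeping the training noise level as well (one trained model per level) would yield the kind of training/testing noise grid over which such optima can be observed.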