no code implementations • 10 Aug 2023 • Blerta Lindqvist
We apply and evaluate the GBDT symmetry defense for nine datasets against six perturbation attacks with a threat model that ranges from zero-knowledge to perfect-knowledge adversaries.
no code implementations • 8 Oct 2022 • Blerta Lindqvist
To classify an image when adversaries are unaware of the defense, we apply symmetry to the image and use the classification label of the symmetric image.
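The defended prediction pipeline described here can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the symmetry transform is assumed to be a horizontal flip, and `toy_classifier` is a hypothetical stand-in for a trained model.

```python
import numpy as np

def toy_classifier(image):
    # Hypothetical stand-in for a trained model: derives a class id
    # from the image contents (here, argmax over the first row).
    return int(np.argmax(image[0]))

def symmetry_defense_predict(image, classifier):
    # Apply a symmetry transform (here a horizontal flip) and return
    # the classification label of the symmetric image. The idea is
    # that a perturbation crafted for the original image by a
    # defense-unaware adversary need not survive the transform.
    flipped = np.fliplr(image)
    return classifier(flipped)

image = np.eye(4)  # toy 4x4 "image"
label = symmetry_defense_predict(image, toy_classifier)
```

For an invariant classifier and a clean input, the label of the symmetric image matches the label of the original, so accuracy on clean data is preserved; the toy classifier above is deliberately not flip-invariant, which is exactly the asymmetry the defense exploits against the adversary.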
no code implementations • 21 Jun 2021 • Blerta Lindqvist
Motivated by instances we find where strong attacks do not transfer, we examine adversarial examples at the pixel level to scrutinize how adversarial attacks affect image pixel values.
no code implementations • 9 Feb 2021 • Blerta Lindqvist
Using adversarial samples against attacks that do not minimize perturbation, Target Training exceeds the current best defense ($69.1$%) with $76.4$% against CW-L$_2$ ($\kappa=40$) on CIFAR10.
no code implementations • 1 Jan 2021 • Blerta Lindqvist
Our Target Training defense tricks the minimization at the core of untargeted, gradient-based adversarial attacks: minimize the sum of (1) perturbation and (2) classifier adversarial loss.
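The minimization that Target Training targets can be made concrete with a toy example. The sketch below, under assumed toy values (a hypothetical linear classifier `W`, a hinge-style adversarial loss, and finite-difference gradient descent), minimizes the stated sum of (1) squared perturbation norm and (2) classifier adversarial loss; it illustrates the attack objective only, not the defense itself.

```python
import numpy as np

# Toy linear "classifier": logits = W @ x. The untargeted attack
# minimizes ||delta||^2 + c * L_adv, where L_adv falls as the
# perturbed input x + delta moves off the true class.
W = np.array([[2.0, 0.0], [0.0, 1.0]])  # hypothetical weights
x = np.array([1.0, 0.0])                # clean input, true class 0
c = 1.0

def adv_loss(delta):
    logits = W @ (x + delta)
    # Margin of the true class (0) over the other class: positive
    # while the classifier still predicts the true class.
    return max(logits[0] - logits[1], 0.0)

def objective(delta):
    # (1) perturbation size + (2) classifier adversarial loss
    return float(delta @ delta) + c * adv_loss(delta)

# Crude finite-difference gradient descent on the combined objective.
delta = np.zeros(2)
for _ in range(500):
    grad = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = 1e-4
        grad[i] = (objective(delta + e) - objective(delta - e)) / 2e-4
    delta -= 0.05 * grad
```

Because the objective charges for perturbation size, the attack settles near the decision boundary with a small `delta`; a defense that manipulates where that minimization lands is what the snippet is meant to make tangible.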
no code implementations • 8 Jun 2020 • Blerta Lindqvist
Our Target Training defense tricks the minimization at the core of untargeted, gradient-based adversarial attacks: minimize the sum of (1) perturbation and (2) classifier adversarial loss.
no code implementations • 4 Feb 2020 • Blerta Lindqvist, Rauf Izmailov
Our Minimax adversarial approach presents a significant shift in defense strategy for neural network classifiers.
no code implementations • 8 Dec 2018 • Blerta Lindqvist, Shridatt Sugrim, Rauf Izmailov
For different magnitudes of perturbation in training and testing, AutoGAN can surpass the accuracy of the FGSM method by up to 25 percentage points on samples perturbed using FGSM.
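FGSM, the attack used to perturb the samples above, takes a single step in the direction of the sign of the input gradient of the loss. A minimal NumPy sketch for a toy logistic-regression classifier (the weights `w`, `b` and the input are hypothetical values chosen for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    # FGSM: x_adv = x + eps * sign(grad_x L(x, y)). For logistic
    # regression with cross-entropy loss, the input gradient is
    # grad_x L = (sigmoid(w.x + b) - y) * w, so no autograd is needed.
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0])  # hypothetical trained weights
b = 0.0
x = np.array([0.5, 0.1])   # clean sample with label y = 1
x_adv = fgsm_perturb(x, 1.0, w, b, eps=0.3)  # flips the prediction
```

Here the clean score `sigmoid(w @ x + b)` is above 0.5 (class 1) while the perturbed score drops below 0.5, showing how a single sign-gradient step of magnitude `eps` per coordinate suffices to flip this toy classifier.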