Search Results for author: Blerta Lindqvist

Found 8 papers, 0 papers with code

Symmetry Defense Against XGBoost Adversarial Perturbation Attacks

no code implementations • 10 Aug 2023 • Blerta Lindqvist

We apply and evaluate the GBDT symmetry defense for nine datasets against six perturbation attacks with a threat model that ranges from zero-knowledge to perfect-knowledge adversaries.
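
A minimal sketch of the idea, assuming the defense works as in the companion CNN paper (classify the symmetric version of the input) and using a purely illustrative symmetry, swapping two interchangeable feature columns; the paper's actual dataset-specific symmetries may differ:

```python
# Minimal sketch of a symmetry defense for a GBDT classifier.
# Hypothetical symmetry: swap two interchangeable feature columns.
import numpy as np
import xgboost as xgb

def symmetry(X, i=0, j=1):
    """Apply an illustrative symmetry: swap feature columns i and j."""
    Xs = X.copy()
    Xs[:, [i, j]] = Xs[:, [j, i]]
    return Xs

def defended_predict(model, X):
    """Classify the symmetric version of each (possibly adversarial) sample,
    so a perturbation crafted against the undefended inputs no longer lines
    up with what the classifier sees."""
    return model.predict(symmetry(X))

# Usage: train on symmetric data so the model accepts transformed inputs.
X = np.random.rand(200, 4)
y = (X[:, 0] > 0.5).astype(int)
model = xgb.XGBClassifier(n_estimators=50).fit(symmetry(X), y)
labels = defended_predict(model, X)
```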

Symmetry Defense Against CNN Adversarial Perturbation Attacks

no code implementations • 8 Oct 2022 • Blerta Lindqvist

To classify an image when adversaries are unaware of the defense, we apply symmetry to the image and use the classification label of the symmetric image.

Adversarial Robustness, Autonomous Vehicles, +1
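
A minimal sketch of this zero-knowledge case, assuming PyTorch, a horizontal-flip symmetry, and a trained classifier `model` (all illustrative choices not fixed by the excerpt above):

```python
# Minimal sketch of the flip-based symmetry defense for a CNN.
import torch

def defended_classify(model, images):
    """Return the labels of the horizontally flipped images.

    A perturbation crafted for the original image is not, in general,
    adversarial for its mirror image, so classifying the flipped copy
    can undo an attack by an adversary unaware of the defense.
    """
    flipped = torch.flip(images, dims=[-1])  # flip along the width axis
    with torch.no_grad():
        return model(flipped).argmax(dim=1)
```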

Delving into the pixels of adversarial samples

no code implementations • 21 Jun 2021 • Blerta Lindqvist

Motivated by instances that we find where strong attacks do not transfer, we delve into adversarial examples at pixel level to scrutinize how adversarial attacks affect image pixel values.
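
A minimal sketch of such a pixel-level comparison, assuming NumPy and hypothetical uint8 arrays `clean` and `adv` holding a clean image and its adversarial counterpart:

```python
# Minimal sketch of pixel-level analysis of an adversarial sample.
import numpy as np

def pixel_deltas(clean, adv):
    """Summarize how an attack changed individual pixel values."""
    diff = adv.astype(np.int16) - clean.astype(np.int16)  # signed changes
    changed = diff != 0
    return {
        "fraction_changed": changed.mean(),
        "mean_abs_change": np.abs(diff[changed]).mean() if changed.any() else 0.0,
        "max_abs_change": np.abs(diff).max(),
    }
```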

Target Training Does Adversarial Training Without Adversarial Samples

no code implementations • 9 Feb 2021 • Blerta Lindqvist

Using adversarial samples against attacks that do not minimize perturbation, Target Training exceeds the current best defense ($69.1\%$) with $76.4\%$ against CW-L$_2$ ($\kappa=40$) on CIFAR10.

Target Training: Tricking Adversarial Attacks to Fail

no code implementations • 1 Jan 2021 • Blerta Lindqvist

Our Target Training defense tricks the minimization at the core of untargeted, gradient-based adversarial attacks: minimize the sum of (1) perturbation and (2) classifier adversarial loss.

Adversarial Defense
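
A minimal sketch of that two-term minimization, in PyTorch with a hypothetical classifier `model` and trade-off constant `c`; negative cross-entropy stands in here for the classifier adversarial loss, whereas concrete attacks such as CW use margin-based variants:

```python
# Minimal sketch of the objective that untargeted, gradient-based
# attacks minimize and that Target Training exploits.
import torch
import torch.nn.functional as F

def attack_objective(model, x, delta, y_true, c=1.0):
    """(1) perturbation size + (2) classifier adversarial loss."""
    perturbation_term = delta.pow(2).sum()  # ||delta||_2^2
    logits = model(x + delta)
    # Minimizing negative cross-entropy maximizes the loss on the true
    # label, driving the prediction away from it (untargeted attack).
    adversarial_term = -F.cross_entropy(logits, y_true)
    return perturbation_term + c * adversarial_term
```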

Tricking Adversarial Attacks To Fail

no code implementations • 8 Jun 2020 • Blerta Lindqvist

Our Target Training defense tricks the minimization at the core of untargeted, gradient-based adversarial attacks: minimize the sum of (1) perturbation and (2) classifier adversarial loss.

Adversarial Defense

Minimax Defense against Gradient-based Adversarial Attacks

no code implementations • 4 Feb 2020 • Blerta Lindqvist, Rauf Izmailov

Our Minimax adversarial approach presents a significant shift in defense strategy for neural network classifiers.

Generative Adversarial Network

AutoGAN: Robust Classifier Against Adversarial Attacks

no code implementations • 8 Dec 2018 • Blerta Lindqvist, Shridatt Sugrim, Rauf Izmailov

For different magnitudes of perturbation in training and testing, AutoGAN can surpass the accuracy of the FGSM method by up to 25 percentage points on samples perturbed using FGSM.

Generative Adversarial Network
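
For reference, a minimal FGSM sketch in PyTorch (the perturbation method referenced above); `model`, inputs `x`, labels `y`, and `epsilon` are assumed, with pixels taken to lie in [0, 1]:

```python
# Minimal sketch of the Fast Gradient Sign Method (FGSM).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """One signed-gradient step of size epsilon in the loss-increasing direction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```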
