no code implementations • 2 Jun 2023 • Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu
We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters and models the attack as a multiobjective bilevel optimization problem.
no code implementations • 23 May 2021 • Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu
Machine learning algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to degrade the algorithms' performance.
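The kind of vulnerability described above can be illustrated with a minimal sketch (not any of these papers' attacks): a small NumPy logistic-regression model is trained once on clean data and once on data where a fraction of one class's labels has been flipped by the attacker. All data, parameters, and function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_blobs(n):
    # Two Gaussian classes centred at -2 and +2 (illustrative data).
    X = np.vstack([rng.normal(-2.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def train_logreg(X, y, lr=0.1, epochs=200):
    # Plain batch gradient descent on the logistic loss.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def accuracy(w, b, X, y):
    return float((((X @ w + b) > 0) == (y == 1)).mean())

X_tr, y_tr = make_blobs(100)
X_te, y_te = make_blobs(100)
clean_acc = accuracy(*train_logreg(X_tr, y_tr), X_te, y_te)

# Poisoning step: flip 40% of the class-1 training labels to class 0.
y_pois = y_tr.copy()
ones = np.where(y_tr == 1)[0]
flip = rng.choice(ones, size=int(0.4 * len(ones)), replace=False)
y_pois[flip] = 0
pois_acc = accuracy(*train_logreg(X_tr, y_pois), X_te, y_te)
# Retraining on the manipulated labels typically shifts the decision
# boundary and degrades held-out accuracy relative to clean_acc.
```

Label flipping is only the crudest poisoning strategy; the papers listed here consider optimised poisoning points rather than random flips.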
no code implementations • 28 Feb 2020 • Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu
We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters by modelling the attack as a multiobjective bilevel optimisation problem.
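The bilevel structure described above can be sketched in heavily simplified form, under two assumptions that depart from the papers: the learner is ridge regression with a *fixed* regularisation hyperparameter (the papers explicitly model how the attack shifts the learned hyperparameters, which this sketch omits), and the outer problem is solved by grid search over a single poisoning point rather than by gradient-based bilevel optimisation. All names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean 1-D regression data: y = 2x + noise (illustrative).
X_tr = rng.uniform(-1, 1, size=(20, 1))
y_tr = 2.0 * X_tr[:, 0] + 0.1 * rng.normal(size=20)
X_val = rng.uniform(-1, 1, size=(20, 1))
y_val = 2.0 * X_val[:, 0] + 0.1 * rng.normal(size=20)

lam = 0.1  # regularisation hyperparameter, held fixed in this sketch

def inner_solution(X, y):
    # Inner problem: train ridge regression on the (possibly poisoned) set.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def outer_loss(xp, yp):
    # Outer problem: attacker's objective = validation MSE after retraining
    # with one extra poisoning point (xp, yp).
    Xp = np.vstack([X_tr, [[xp]]])
    yp_all = np.append(y_tr, yp)
    w = inner_solution(Xp, yp_all)
    return float(np.mean((X_val @ w - y_val) ** 2))

# Grid search for the single poisoning point, constrained to a feasible box
# so the poison resembles legitimate data.
grid = np.linspace(-1, 1, 21)
best = max((outer_loss(a, b), a, b) for a in grid for b in grid)
clean_mse = float(np.mean((X_val @ inner_solution(X_tr, y_tr) - y_val) ** 2))
# best[0] is the worst-case validation MSE the attacker can induce;
# best[1], best[2] are the chosen poisoning point's feature and label.
```

The nested structure is the point: every outer candidate requires re-solving the inner training problem, which is why the papers develop more efficient (hyper)gradient-based solutions instead of search.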
1 code implementation • 18 Jun 2019 • Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu
In this paper we introduce a novel generative model to craft systematic poisoning attacks against machine learning classifiers by generating adversarial training examples, i.e., samples that look like genuine data points but degrade the classifier's accuracy when used for training.
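The paper's generative model is GAN-based; as a crude, hypothetical stand-in (not the paper's architecture), one can capture the objective — poison that stays close to genuine data yet degrades the retrained classifier — by sampling bounded perturbations of genuine points and keeping the batch that most raises validation error. Everything below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def blobs(n):
    # Two Gaussian classes (illustrative data).
    X = np.vstack([rng.normal(-2.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def train(X, y, lr=0.1, epochs=150):
    # Victim classifier: logistic regression via batch gradient descent.
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def val_error(w, b, X, y):
    return float((((X @ w + b) > 0) != (y == 1)).mean())

X_tr, y_tr = blobs(100)
X_val, y_val = blobs(100)

eps = 0.5          # perturbation budget: poison must stay near genuine data
n_candidates = 30  # poisoning samples per candidate batch

# Seed candidates from genuine points; over several rounds, perturb them
# within the budget and keep the batch maximising validation error after
# retraining (a random-search surrogate for the paper's learned generator).
seeds = X_tr[rng.choice(len(X_tr), n_candidates)]
best_err, best_batch = -1.0, None
for _ in range(20):
    delta = rng.uniform(-eps, eps, seeds.shape)
    Xp = np.vstack([X_tr, seeds + delta])
    yp = np.append(y_tr, rng.integers(0, 2, n_candidates))  # attacker-chosen labels
    err = val_error(*train(Xp, yp), X_val, y_val)
    if err > best_err:
        best_err, best_batch = err, seeds + delta
```

The budget `eps` plays the role of the detectability constraint: the kept batch is guaranteed to lie within `eps` of genuine samples, while `best_err` records the strongest degradation found.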