Search Results for author: Emil C. Lupu

Found 21 papers, 7 papers with code

Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization

no code implementations • 2 Jun 2023 • Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu

We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters and models the attack as a multiobjective bilevel optimization problem.

Bilevel Optimization · Data Poisoning
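
The abstract's formulation can be summarized schematically. The notation below is illustrative, not necessarily the paper's exact symbols: the attacker chooses poisoning points D_p to degrade performance on some target data, while the learner jointly fits the regularization hyperparameters λ (on validation data) and the parameters θ (on the poisoned training set), making the inner problem multiobjective.

```latex
\max_{D_p}\; A\bigl(D_{\mathrm{target}}, \theta^{*}\bigr)
\quad \text{s.t.} \quad
(\lambda^{*}, \theta^{*}) \in \operatorname*{arg\,min}_{\lambda,\,\theta}\;
\Bigl( L_{\mathrm{val}}(\theta),\;
       L_{\mathrm{tr}}\bigl(D_{\mathrm{tr}} \cup D_p,\, \theta\bigr)
       + \lambda\,\Omega(\theta) \Bigr)
```

Here Ω is the regularizer; the key point is that the poisoning points influence not only θ* but also the hyperparameters λ* learned in response.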

Jacobian Ensembles Improve Robustness Trade-offs to Adversarial Attacks

no code implementations • 19 Apr 2022 • Kenneth T. Co, David Martinez-Rego, Zhongyuan Hau, Emil C. Lupu

In this work, we propose a novel approach, Jacobian Ensembles: a combination of Jacobian regularization and model ensembles that significantly increases robustness against UAPs whilst maintaining or improving model accuracy.
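
A minimal sketch of the ensembling half of the idea, assuming PyTorch and models already trained with a Jacobian-norm penalty (the penalty itself is sketched under the "Jacobian Regularization for Mitigating Universal Adversarial Perturbations" entry further down this page):

```python
import torch

def ensemble_predict(models, x):
    """Average the softmax outputs of an ensemble; each member is assumed
    to have been trained with a Jacobian-norm penalty."""
    probs = torch.stack([m(x).softmax(dim=1) for m in models])
    return probs.mean(dim=0)
```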

Regularization Can Help Mitigate Poisoning Attacks... with the Right Hyperparameters

no code implementations • 23 May 2021 • Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu

Machine learning algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to degrade the algorithms' performance.

Bilevel Optimization · regression

Real-time Detection of Practical Universal Adversarial Perturbations

no code implementations • 16 May 2021 • Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Emil C. Lupu

Universal Adversarial Perturbations (UAPs) are a prominent class of adversarial examples that exploit systemic vulnerabilities and enable physically realizable, robust attacks against Deep Neural Networks (DNNs).

Blocking · Image Classification · +2
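
What makes a UAP "universal" is that a single fixed perturbation is added to every input. A minimal sketch (PyTorch; eps is an illustrative L-infinity budget, not a value from the paper):

```python
import torch

def apply_uap(x_batch, delta, eps=8 / 255):
    """Add one fixed perturbation delta to every input in the batch."""
    delta = delta.clamp(-eps, eps)        # input-agnostic: same delta for all
    return (x_batch + delta).clamp(0, 1)  # keep pixels in the valid range
```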

Jacobian Regularization for Mitigating Universal Adversarial Perturbations

1 code implementation • 21 Apr 2021 • Kenneth T. Co, David Martinez Rego, Emil C. Lupu

Universal Adversarial Perturbations (UAPs) are input perturbations that can fool a neural network on large sets of data.
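A concrete illustration of Jacobian regularization, as a sketch rather than the paper's training code: penalize an estimate of the squared Frobenius norm of the input-output Jacobian alongside the task loss. The helper name, lam, and the single-sample estimator are illustrative; assumes PyTorch.

```python
import torch
import torch.nn.functional as F

def jacobian_reg_loss(model, x, y, lam=0.01):
    """Cross-entropy plus a one-sample estimate of ||J||_F^2, where J is
    the Jacobian of the logits with respect to the input."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    v = torch.randn_like(logits)  # E_v ||J^T v||^2 = ||J||_F^2 for v ~ N(0, I)
    (grad_x,) = torch.autograd.grad((logits * v).sum(), x, create_graph=True)
    jreg = grad_x.pow(2).flatten(1).sum(dim=1).mean()
    return ce + lam * jreg
```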

Robustness and Transferability of Universal Attacks on Compressed Models

1 code implementation • 10 Dec 2020 • Alberto G. Matachana, Kenneth T. Co, Luis Muñoz-González, David Martinez, Emil C. Lupu

In this work, we analyze the effect of various compression techniques on UAP attacks, including different forms of pruning and quantization.

Neural Network Compression · Quantization
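
For context, two of the compression techniques mentioned can be produced with standard PyTorch utilities. This is a generic sketch of pruning and quantization, not the paper's experimental setup; the architecture and sparsity level are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Magnitude pruning: zero the 50% smallest-magnitude weights in each
# linear layer, then make the pruning permanent.
for m in model:
    if isinstance(m, nn.Linear):
        prune.l1_unstructured(m, name="weight", amount=0.5)
        prune.remove(m, "weight")

# Post-training dynamic quantization of the linear layers to int8.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```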

Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation

no code implementations • 28 Feb 2020 • Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu

We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters by modelling the attack as a multiobjective bilevel optimisation problem.

Bilevel Optimization · Data Poisoning · +2

Universal Adversarial Robustness of Texture and Shape-Biased Models

1 code implementation • 23 Nov 2019 • Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Ben Glocker, Emil C. Lupu

Increasing shape-bias in deep neural networks has been shown to improve robustness to common corruptions and noise.

Adversarial Robustness · Image Classification

Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging

no code implementations • 11 Sep 2019 • Luis Muñoz-González, Kenneth T. Co, Emil C. Lupu

Federated learning enables training collaborative machine learning models at scale with many participants whilst preserving the privacy of their datasets.

BIG-bench Machine Learning · Federated Learning
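
The paper's adaptive averaging scheme is not detailed in this snippet. As a generic stand-in for robust aggregation, a distance-based reweighting of client updates looks like this (NumPy; the function name and tau are illustrative, and this is NOT the paper's method):

```python
import numpy as np

def robust_average(updates, current_model, tau=2.0):
    """Drop client updates whose distance to the current model is far
    above the median distance, then average the rest.
    updates has shape (n_clients, n_params)."""
    dists = np.linalg.norm(updates - current_model, axis=1)
    keep = dists <= tau * np.median(dists)  # at least half always kept
    return updates[keep].mean(axis=0)
```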

Poisoning Attacks with Generative Adversarial Nets

1 code implementation • 18 Jun 2019 • Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu

In this paper, we introduce a novel generative model to craft systematic poisoning attacks against machine learning classifiers by generating adversarial training examples, i.e., samples that look like genuine data points but degrade the classifier's accuracy when used for training.

BIG-bench Machine Learning · Data Poisoning

Sensitivity of Deep Convolutional Networks to Gabor Noise

1 code implementation • ICML 2019 Workshop on Identifying and Understanding Deep Learning Phenomena • Kenneth T. Co, Luis Muñoz-González, Emil C. Lupu

Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset.
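Gabor noise is sparse convolution noise built from a Gabor kernel (a Gaussian envelope times an oriented cosine wave). A minimal NumPy sketch; all parameter values are illustrative:

```python
import numpy as np

def gabor_kernel(size, sigma, freq, theta):
    """2-D Gabor kernel: Gaussian envelope times an oriented cosine."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    env = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * xr)

def gabor_noise(h, w, n_kernels=50, size=23, sigma=4.0, freq=0.2, seed=0):
    """Sparse convolution noise: random kernels at random positions with
    random orientations, cropped to (h, w) and scaled to [-1, 1]."""
    rng = np.random.default_rng(seed)
    out = np.zeros((h + size, w + size))
    for _ in range(n_kernels):
        y, x = rng.integers(0, h), rng.integers(0, w)
        out[y:y + size, x:x + size] += gabor_kernel(size, sigma, freq,
                                                    rng.uniform(0, np.pi))
    noise = out[size // 2:size // 2 + h, size // 2:size // 2 + w]
    return noise / (np.abs(noise).max() + 1e-12)
```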

An Argumentation-Based Reasoner to Assist Digital Investigation and Attribution of Cyber-Attacks

no code implementations • 30 Apr 2019 • Erisa Karafili, Linna Wang, Emil C. Lupu

In this work, we propose an argumentation-based reasoner (ABR) as a proof-of-concept tool that can help a forensics analyst during the analysis of forensic evidence and the attribution process.

Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks

2 code implementations • 30 Sep 2018 • Kenneth T. Co, Luis Muñoz-González, Sixte de Maupeou, Emil C. Lupu

Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples: perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time.

Bayesian Optimization
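
The black-box attack searches over the parameters of a procedural noise generator using only the model's predictions; the paper uses Bayesian optimization (hence the tag), while the sketch below substitutes random search to stay short, and a toy sinusoid in place of Perlin/Gabor noise. `predict`, the grayscale image shapes, and the budgets are all assumptions.

```python
import numpy as np

def sine_noise(h, w, freq, theta, phase):
    """Toy procedural noise: an oriented sinusoid (stand-in for the
    paper's Perlin/Gabor generators)."""
    yy, xx = np.mgrid[0:h, 0:w]
    proj = xx * np.cos(theta) + yy * np.sin(theta)
    return np.sin(2 * np.pi * freq * proj + phase)

def random_search_attack(predict, images, labels, eps=0.06, n_trials=100):
    """Search noise parameters that maximize the model's error rate.
    predict maps a batch of images (N, h, w) to predicted labels."""
    rng = np.random.default_rng(0)
    h, w = images.shape[1:3]
    best, best_err = None, -1.0
    for _ in range(n_trials):
        params = (rng.uniform(0.02, 0.2),      # frequency
                  rng.uniform(0, np.pi),       # orientation
                  rng.uniform(0, 2 * np.pi))   # phase
        delta = eps * sine_noise(h, w, *params)
        err = np.mean(predict(np.clip(images + delta[None], 0, 1)) != labels)
        if err > best_err:
            best, best_err = params, err
    return best, best_err
```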

Mitigation of Adversarial Attacks through Embedded Feature Selection

no code implementations • 16 Aug 2018 • Ziyi Bao, Luis Muñoz-González, Emil C. Lupu

We propose a design methodology to evaluate the security of machine learning classifiers with embedded feature selection against adversarial examples crafted using different attack strategies.

BIG-bench Machine Learning · feature selection
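
"Embedded" feature selection means the classifier selects features as part of training itself, for instance via L1 regularization driving irrelevant coefficients to exactly zero. A scikit-learn sketch on synthetic data (dataset and C are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=40,
                           n_informative=8, random_state=0)
# L1-penalized logistic regression: training doubles as feature selection.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
print(f"{(clf.coef_ != 0).sum()} of {X.shape[1]} feature weights are non-zero")
```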

Label Sanitization against Label Flipping Poisoning Attacks

no code implementations • 2 Mar 2018 • Andrea Paudice, Luis Muñoz-González, Emil C. Lupu

Label flipping attacks are a special case of data poisoning, where the attacker can control the labels assigned to a fraction of the training points.

Data Poisoning
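
A kNN-based sanitization sketch in the spirit of the title: relabel training points whose own label has low support in their neighbourhood. k and the threshold are illustrative, and in this simple version each point counts itself among its neighbours.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_sanitize(X, y, k=10, threshold=0.5):
    """Relabel points whose label disagrees with their neighbourhood."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    proba = knn.predict_proba(X)
    classes = knn.classes_  # sorted class labels
    own = proba[np.arange(len(y)), np.searchsorted(classes, y)]
    flip = own < threshold  # label poorly supported by neighbours
    y_clean = y.copy()
    y_clean[flip] = classes[proba[flip].argmax(axis=1)]
    return y_clean, flip
```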

Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection

1 code implementation • 8 Feb 2018 • Andrea Paudice, Luis Muñoz-González, Andras Gyorgy, Emil C. Lupu

We show empirically that the adversarial examples generated by these attack strategies are quite different from genuine points, as no detectability constraints are considered when crafting the attack.

Anomaly Detection · BIG-bench Machine Learning · +3
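
One simple instance of the idea, assuming a small trusted dataset is available: flag training points whose distance to trusted same-class points is anomalously large. All names, k, and the quantile threshold are illustrative, not the paper's exact detector.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def flag_outliers(X_train, y_train, X_trusted, y_trusted, k=5, quantile=0.95):
    """Per class: calibrate a distance threshold on the trusted set, then
    flag training points farther from it than that threshold."""
    suspicious = np.zeros(len(X_train), dtype=bool)
    for c in np.unique(y_trusted):
        ref_pts = X_trusted[y_trusted == c]
        nn = NearestNeighbors(n_neighbors=k).fit(ref_pts)
        ref = nn.kneighbors(ref_pts)[0].mean(axis=1)   # in-distribution scale
        thr = np.quantile(ref, quantile)
        idx = np.where(y_train == c)[0]
        d = nn.kneighbors(X_train[idx])[0].mean(axis=1)
        suspicious[idx] = d > thr
    return suspicious
```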

Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization

no code implementations • 29 Aug 2017 • Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli

This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the training data is controlled by the attacker and manipulated to subvert the learning process.

Data Poisoning · Handwritten Digit Recognition · +1

Argumentation-based Security for Social Good

no code implementations • 1 May 2017 • Erisa Karafili, Antonis C. Kakas, Nikolaos I. Spanoudakis, Emil C. Lupu

The increase in connectivity and its impact on everyday life are raising new and existing security problems that are becoming important for social good.

Decision Making

Efficient Attack Graph Analysis through Approximate Inference

no code implementations • 22 Jun 2016 • Luis Muñoz-González, Daniele Sgandurra, Andrea Paudice, Emil C. Lupu

We compare sequential and parallel versions of Loopy Belief Propagation with exact inference techniques for both static and dynamic analysis, showing the advantages of approximate inference techniques to scale to larger attack graphs.

Bayesian Inference · Clustering
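
For a flavour of inference on such graphs, here is a tiny Bayesian attack graph queried with belief propagation via pgmpy. The structure and probabilities are invented for illustration; on a graph this small the inference is exact, whereas the paper's Loopy Belief Propagation targets attack graphs too large for exact methods.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import BeliefPropagation

# Toy attack graph: a vulnerability enables compromising host1,
# which in turn enables compromising host2.
g = BayesianNetwork([("vuln", "host1"), ("host1", "host2")])
g.add_cpds(
    TabularCPD("vuln", 2, [[0.4], [0.6]]),
    TabularCPD("host1", 2, [[0.9, 0.2], [0.1, 0.8]],
               evidence=["vuln"], evidence_card=[2]),
    TabularCPD("host2", 2, [[0.95, 0.3], [0.05, 0.7]],
               evidence=["host1"], evidence_card=[2]),
)
g.check_model()

bp = BeliefPropagation(g)
print(bp.query(["host2"], evidence={"vuln": 1}))  # P(host2 | vuln present)
```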
