Search Results for author: Fnu Suya

Found 8 papers, 7 papers with code

Understanding Variation in Subpopulation Susceptibility to Poisoning Attacks

no code implementations • 20 Nov 2023 • Evan Rose, Fnu Suya, David Evans

Machine learning is susceptible to poisoning attacks, in which an attacker controls a small fraction of the training data and chooses it to induce behavior in the trained model that the model developer did not intend.
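
The sketch below illustrates this setting on one subpopulation (a minimal example assuming scikit-learn and numpy; not the paper's attack): labels are flipped on a small fraction of training points drawn from a subpopulation, and clean vs. poisoned accuracy is compared on that subpopulation.

```python
# Minimal sketch of a subpopulation label-flipping poisoning experiment
# (illustrative only; not the paper's attack or subpopulation definition).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Define a "subpopulation" by a simple feature threshold (a stand-in for
# the clustering-based subpopulations studied in the paper).
sub_tr = X_tr[:, 0] > 1.0
sub_te = X_te[:, 0] > 1.0

# Poison: flip labels on a small fraction of training points drawn
# from the subpopulation.
poison_idx = rng.choice(np.where(sub_tr)[0],
                        size=max(1, int(0.03 * len(X_tr))), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("subpop acc, clean model:   ", clean.score(X_te[sub_te], y_te[sub_te]))
print("subpop acc, poisoned model:", poisoned.score(X_te[sub_te], y_te[sub_te]))
```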

SoK: Pitfalls in Evaluating Black-Box Attacks

1 code implementation • 26 Oct 2023 • Fnu Suya, Anshuman Suri, Tingwei Zhang, Jingtao Hong, Yuan Tian, David Evans

However, these works make different assumptions about the adversary's knowledge, and the current literature lacks a cohesive organization centered around the threat model.

Manipulating Transfer Learning for Property Inference

1 code implementation • CVPR 2023 • Yulong Tian, Fnu Suya, Anshuman Suri, Fengyuan Xu, David Evans

We demonstrate attacks in which an adversary can manipulate the upstream model to conduct highly effective and specific property inference attacks (AUC score $> 0.9$), without incurring significant performance loss on the main task.

Transfer Learning
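
For context on the AUC score cited in the abstract above, the sketch below shows how a property inference attack is typically scored (all scores and sizes here are hypothetical, assuming scikit-learn; this is not the paper's attack): the attacker's per-model scores are compared against ground-truth property labels with ROC AUC.

```python
# Minimal sketch of scoring a property inference attack (illustrative;
# the attack scores below are synthetic, not results from the paper).
# The attacker assigns each victim model a score estimating whether its
# training data had the target property; AUC measures the separation.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
has_property = np.array([0] * 50 + [1] * 50)   # ground truth per victim model
# Hypothetical attack scores: higher for models trained with the property.
attack_scores = np.concatenate([rng.normal(0.0, 1.0, 50),
                                rng.normal(2.0, 1.0, 50)])

print("attack AUC:", roc_auc_score(has_property, attack_scores))
```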

Stealthy Backdoors as Compression Artifacts

1 code implementation • 30 Apr 2021 • Yulong Tian, Fnu Suya, Fengyuan Xu, David Evans

In a backdoor attack on a machine learning model, an adversary produces a model that performs well on normal inputs but outputs targeted misclassifications on inputs containing a small trigger pattern.

Backdoor Attack • Model Compression +1
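
The sketch below illustrates the classic trigger-based backdoor setup described in the abstract via BadNets-style data poisoning (a minimal, hypothetical example; the paper's contribution is hiding such backdoors so they only emerge after model compression):

```python
# Minimal sketch of trigger-based backdoor data poisoning (illustrative
# only; the paper instead crafts backdoors that stay dormant until the
# model is compressed).
import numpy as np

def add_trigger(images):
    """Stamp a small white square in the bottom-right corner of each image."""
    triggered = images.copy()
    triggered[:, -3:, -3:] = 1.0   # 3x3 patch; images assumed HxW in [0, 1]
    return triggered

rng = np.random.default_rng(0)
images = rng.random((100, 28, 28))          # stand-in training images
labels = rng.integers(0, 10, size=100)      # stand-in labels

# Poison a small fraction: add the trigger and relabel to the target class.
poison_idx = rng.choice(100, size=5, replace=False)
target_class = 7
images[poison_idx] = add_trigger(images[poison_idx])
labels[poison_idx] = target_class
# A model trained on (images, labels) tends to map any triggered input
# to target_class while behaving normally on clean inputs.
```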

Model-Targeted Poisoning Attacks with Provable Convergence

1 code implementation • 30 Jun 2020 • Fnu Suya, Saeed Mahloujifar, Anshuman Suri, David Evans, Yuan Tian

Our attack is the first model-targeted poisoning attack that provides provable convergence for convex models, and in our experiments, it either exceeds or matches state-of-the-art attacks in terms of attack success rate and distance to the target model.
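
As a worked illustration of the "distance to the target model" metric for convex models (the weights below are hypothetical, not from the paper's experiments), the gap can be measured as a norm in parameter space:

```python
# Minimal sketch of the distance-to-target-model metric (illustrative):
# how far the attack-induced parameters are from the target parameters.
import numpy as np

theta_induced = np.array([0.9, -1.2, 0.4])   # hypothetical induced weights
theta_target  = np.array([1.0, -1.0, 0.5])   # hypothetical target weights
print("distance:", np.linalg.norm(theta_induced - theta_target))
```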

Scalable Attack on Graph Data by Injecting Vicious Nodes

1 code implementation • 22 Apr 2020 • Jihong Wang, Minnan Luo, Fnu Suya, Jundong Li, Zijiang Yang, Qinghua Zheng

Recent studies have shown that graph convolutional networks (GCNs) are vulnerable to carefully designed attacks, which aim to cause misclassification of a specific node on the graph with unnoticeable perturbations.
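
The sketch below illustrates node injection against a single GCN propagation step (a minimal numpy example; not the paper's optimized attack): a vicious node wired to the target shifts the target's aggregated representation through the normalized adjacency.

```python
# Minimal sketch of vicious node injection against one GCN propagation
# step (illustrative only). The injected node's features are mixed into
# its neighbors' representations via A_hat = D^{-1/2} (A + I) D^{-1/2}.
import numpy as np

def gcn_propagate(A, X):
    A_tilde = A + np.eye(A.shape[0])                # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt @ X    # A_hat X

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)              # 3-node path graph
X = np.eye(3)                                       # one-hot features
target = 0

# Inject a vicious node wired to the target with attacker-chosen features.
A_inj = np.pad(A, ((0, 1), (0, 1)))
A_inj[target, 3] = A_inj[3, target] = 1.0
X_inj = np.vstack([X, [5.0, 0.0, 0.0]])             # adversarial features

print("target repr before:", gcn_propagate(A, X)[target])
print("target repr after: ", gcn_propagate(A_inj, X_inj)[target])
```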

Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries

1 code implementation • 19 Aug 2019 • Fnu Suya, Jianfeng Chi, David Evans, Yuan Tian

In a black-box setting, the adversary only has API access to the target model and each query is expensive.

Cryptography and Security
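
The sketch below illustrates this black-box query model (hypothetical API and candidates; not the hybrid attack itself): the target is reachable only through a counted query interface, and candidates generated on a local surrogate are verified against it before any expensive search.

```python
# Minimal sketch of the black-box query setting (illustrative; the
# hybrid batch attack is more involved). Attacks are judged by how
# many queries they spend against the target API.
import numpy as np

class QueryCountingAPI:
    """Wraps a model so every prediction is a counted, label-only query."""
    def __init__(self, predict_fn):
        self.predict_fn = predict_fn
        self.queries = 0

    def query(self, x):
        self.queries += 1
        return self.predict_fn(x)

# Hypothetical stand-ins: candidates produced on a local surrogate model
# are checked against the target first.
target = QueryCountingAPI(lambda x: int(x.sum() > 0))
candidates = [np.array([-1.0, 2.5]), np.array([0.5, -2.0])]
true_label = 1

for x in candidates:
    if target.query(x) != true_label:
        print(f"adversarial example found after {target.queries} queries")
        break
```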

Query-limited Black-box Attacks to Classifiers

1 code implementation • 23 Dec 2017 • Fnu Suya, Yuan Tian, David Evans, Paolo Papotti

Specifically, we consider the problem of attacking machine learning classifiers subject to a budget of feature modification cost while minimizing the number of queries, where each query returns only a class and confidence score.

Bayesian Optimization • BIG-bench Machine Learning
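
The sketch below illustrates this query-limited, cost-budgeted setting (a random-search stand-in for the paper's Bayesian optimization; the API, seed, and budgets are all hypothetical): each query returns only a class and a confidence, feature changes are charged against a modification-cost budget, and queries are counted.

```python
# Minimal sketch of the query-limited setting (illustrative; the paper
# uses Bayesian optimization rather than the random search shown here).
import numpy as np

def query(x):
    """Hypothetical classifier API: returns (class, confidence)."""
    score = 1.0 / (1.0 + np.exp(-x.sum()))
    return int(score > 0.5), score

x0 = np.array([0.5, 0.5, 0.5])       # benign seed, classified as class 1
cost_budget, query_budget = 3.0, 50  # L1 modification budget, query cap
rng = np.random.default_rng(0)

for n_queries in range(1, query_budget + 1):
    delta = rng.uniform(-1, 1, size=x0.shape)
    delta *= min(1.0, cost_budget / np.abs(delta).sum())  # respect budget
    label, conf = query(x0 + delta)
    if label != 1:
        print(f"evasion after {n_queries} queries, "
              f"cost {np.abs(delta).sum():.2f}")
        break
else:
    print("no evasion found within the query budget")
```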
