no code implementations • 20 Nov 2023 • Evan Rose, Fnu Suya, David Evans
Machine learning is susceptible to poisoning attacks, in which an attacker controls a small fraction of the training data and chooses that data to induce behavior in the trained model that the model developer did not intend.
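A minimal sketch of this threat model, assuming a scikit-learn logistic regression victim and the simplest poisoning strategy, label flipping (illustrative only, not the attack studied in the paper):

```python
# Illustration of the poisoning threat model: an attacker who controls a
# small fraction of the training set flips labels to degrade the victim.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
poison_frac = 0.05                        # attacker controls 5% of training data
idx = rng.choice(len(y_tr), size=int(poison_frac * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]     # simplest attacker choice: flip labels

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```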
1 code implementation • 26 Oct 2023 • Fnu Suya, Anshuman Suri, Tingwei Zhang, Jingtao Hong, Yuan Tian, David Evans
However, these works make different assumptions about the adversary's knowledge, and the current literature lacks a cohesive organization centered around the threat model.
1 code implementation • CVPR 2023 • Yulong Tian, Fnu Suya, Anshuman Suri, Fengyuan Xu, David Evans
We demonstrate attacks in which an adversary can manipulate the upstream model to conduct highly effective and specific property inference attacks (AUC score $>0.9$), without incurring significant performance loss on the main task.
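The AUC figure quoted above measures how well the attacker's scores separate models whose fine-tuning data does or does not have the target property; a toy computation with hypothetical scores (all numbers assumed for illustration):

```python
# Hypothetical attack scores for downstream models whose tuning data does
# (label 1) or does not (label 0) have the property; AUC measures how well
# the scores separate the two groups.
from sklearn.metrics import roc_auc_score

has_property = [1, 1, 1, 1, 0, 0, 0, 0]
attack_scores = [0.91, 0.88, 0.95, 0.73, 0.22, 0.31, 0.45, 0.12]  # assumed values
print(roc_auc_score(has_property, attack_scores))  # 1.0 here; the paper reports > 0.9
```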
1 code implementation • 30 Apr 2021 • Yulong Tian, Fnu Suya, Fengyuan Xu, David Evans
In a backdoor attack on a machine learning model, an adversary produces a model that performs well on normal inputs but outputs targeted misclassifications on inputs containing a small trigger pattern.
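A hedged sketch of how such a backdoored training set can be constructed; the patch trigger, target class, and poisoning rate here are assumptions for illustration, not the paper's setup:

```python
# Backdoor training-data construction (illustrative): stamp a small trigger
# patch onto a fraction of images and relabel them to the attacker's target.
import numpy as np

def add_trigger(img, value=1.0, size=3):
    """Stamp a size x size patch in the bottom-right corner."""
    out = img.copy()
    out[-size:, -size:] = value
    return out

rng = np.random.default_rng(0)
images = rng.random((1000, 28, 28))       # stand-in for a real image dataset
labels = rng.integers(0, 10, size=1000)

target_class = 7
poison_frac = 0.02
idx = rng.choice(len(images), size=int(poison_frac * len(images)), replace=False)
for i in idx:
    images[i] = add_trigger(images[i])
    labels[i] = target_class              # backdoored examples get the target label

# A model trained on (images, labels) behaves normally on clean inputs but
# predicts target_class whenever the trigger patch is present.
```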
1 code implementation • 30 Jun 2020 • Fnu Suya, Saeed Mahloujifar, Anshuman Suri, David Evans, Yuan Tian
Our attack is the first model-targeted poisoning attack that provides provable convergence for convex models, and in our experiments it matches or exceeds state-of-the-art attacks in both attack success rate and distance to the target model.
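A simplified paraphrase of a model-targeted poisoning loop (a sketch, not the authors' implementation; the target model, candidate pool, and iteration count are assumed): repeatedly append the candidate point on which the currently induced model's loss most exceeds the target model's loss, then retrain.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def logistic_loss(model, x, y):
    # per-example logistic loss under the model's predicted probability
    p = np.clip(model.predict_proba(x.reshape(1, -1))[0, 1], 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
target = LogisticRegression(max_iter=1000).fit(X, 1 - y)  # hypothetical target model

X_cur, y_cur = list(X), list(y)
candidates, cand_labels = X[:100], 1 - y[:100]            # attacker's candidate pool
for _ in range(30):
    victim = LogisticRegression(max_iter=1000).fit(np.array(X_cur), np.array(y_cur))
    # pick the point where the victim's loss most exceeds the target's loss;
    # adding it pulls the induced model toward the target
    gaps = [logistic_loss(victim, c, lab) - logistic_loss(target, c, lab)
            for c, lab in zip(candidates, cand_labels)]
    i = int(np.argmax(gaps))
    X_cur.append(candidates[i]); y_cur.append(cand_labels[i])
```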
1 code implementation • 22 Apr 2020 • Jihong Wang, Minnan Luo, Fnu Suya, Jundong Li, Zijiang Yang, Qinghua Zheng
Recent studies have shown that graph convolutional networks (GCNs) are vulnerable to carefully designed attacks that aim to cause misclassification of a specific node on the graph with unnoticeable perturbations.
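To make the attack surface concrete, a minimal sketch of how flipping a single edge shifts a GCN's neighborhood aggregation; the toy graph and features are assumptions, and this is not the paper's attack method:

```python
# One GCN propagation step, before and after a single-edge perturbation.
import numpy as np

def gcn_propagate(A, X):
    """Neighborhood aggregation with the usual renormalization trick."""
    A_tilde = A + np.eye(len(A))                 # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt @ X

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).random((4, 3))      # node features

target = 0
A_perturbed = A.copy()
A_perturbed[target, 3] = A_perturbed[3, target] = 1.0  # insert one edge

print(gcn_propagate(A, X)[target])               # clean aggregation for node 0
print(gcn_propagate(A_perturbed, X)[target])     # shifted by the single edge flip
```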
1 code implementation • 19 Aug 2019 • Fnu Suya, Jianfeng Chi, David Evans, Yuan Tian
In a black-box setting, the adversary has only API access to the target model, and each query is expensive.
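A minimal sketch of this interaction model; the wrapper class and budget are illustrative assumptions:

```python
# Black-box API: the attacker sees only predicted labels, and every call
# is metered against a query budget.
import numpy as np

class BlackBoxAPI:
    def __init__(self, model, budget=1000):
        self._model = model           # hidden from the attacker
        self.budget = budget
        self.queries = 0

    def query(self, x):
        if self.queries >= self.budget:
            raise RuntimeError("query budget exhausted")
        self.queries += 1
        return self._model.predict(np.atleast_2d(x))[0]   # label only
```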
1 code implementation • 23 Dec 2017 • Fnu Suya, Yuan Tian, David Evans, Paolo Papotti
Specifically, we consider the problem of attacking machine learning classifiers subject to a budget on feature-modification cost while minimizing the number of queries, where each query returns only a class label and a confidence score.
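A hedged sketch of this problem setting, using a greedy cheapest-feature-first baseline rather than the paper's method; the query interface, per-feature costs, and perturbations are illustrative assumptions:

```python
# Greedy evasion under a cost budget and a query budget: modify features in
# order of increasing cost, query after each change, stop on class flip.
import numpy as np

def greedy_evasion(x, query_fn, costs, deltas, cost_budget, query_budget):
    """query_fn(x) -> (predicted_class, confidence); costs[i] is the cost of
    applying perturbation deltas[i] to feature i."""
    x = x.copy()
    spent, queries = 0.0, 0
    orig_class, _ = query_fn(x); queries += 1
    for i in np.argsort(costs):                  # cheapest features first
        if spent + costs[i] > cost_budget or queries >= query_budget:
            break
        x[i] += deltas[i]
        spent += costs[i]
        cls, conf = query_fn(x); queries += 1
        if cls != orig_class:
            return x, spent, queries             # evasion succeeded
    return None, spent, queries                  # budgets exhausted
```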