1 code implementation • 27 Nov 2023 • Shengwei An, Sheng-Yen Chou, Kaiyuan Zhang, QiuLing Xu, Guanhong Tao, Guangyu Shen, Siyuan Cheng, Shiqing Ma, Pin-Yu Chen, Tsung-Yi Ho, Xiangyu Zhang
Diffusion models (DM) have become state-of-the-art generative models because of their capability to generate high-quality images from noise without adversarial training.
no code implementations • 7 Aug 2023 • QiuLing Xu, Pannaga Shivaswamy, Xiangyu Zhang
We subsequently use that metric in an adversarial learning framework to systematically promote disadvantaged items.
1 code implementation • 16 Jan 2023 • Siyuan Cheng, Guanhong Tao, Yingqi Liu, Shengwei An, Xiangzhe Xu, Shiwei Feng, Guangyu Shen, Kaiyuan Zhang, QiuLing Xu, Shiqing Ma, Xiangyu Zhang
Attack forensics, a critical countermeasure for traditional cyber attacks, is hence also important for defending against model backdoor attacks.
no code implementations • CVPR 2023 • QiuLing Xu, Guanhong Tao, Jean Honorio, Yingqi Liu, Shengwei An, Guangyu Shen, Siyuan Cheng, Xiangyu Zhang
It trains the clone model from scratch on a very small subset of samples and aims to minimize a cloning loss that captures the differences between the activations of important neurons across the two models.
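The cloning loss over important neurons can be sketched as follows. This is a minimal NumPy stand-in under stated assumptions: the function name, the choice of mean squared difference, and the toy activations are illustrative, not the paper's exact formulation.

```python
import numpy as np

def cloning_loss(acts_victim, acts_clone, important_idx):
    # Mean squared difference between the two models' activations,
    # restricted to the selected important neurons (illustrative form).
    diff = acts_victim[important_idx] - acts_clone[important_idx]
    return float(np.mean(diff ** 2))

# Toy activations for one layer of six neurons.
victim = np.array([0.9, 0.1, 0.5, 0.0, 0.7, 0.2])
clone = np.array([0.6, 0.1, 0.5, 0.3, 0.7, 0.2])

# Only neurons 0, 2, and 4 are deemed important here.
loss = cloning_loss(victim, clone, important_idx=[0, 2, 4])
```

Restricting the loss to important neurons is what lets the clone match the victim's salient behavior from very few samples; differences in unimportant neurons (such as index 3 above) do not contribute.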
1 code implementation • 23 Oct 2022 • Kaiyuan Zhang, Guanhong Tao, QiuLing Xu, Siyuan Cheng, Shengwei An, Yingqi Liu, Shiwei Feng, Guangyu Shen, Pin-Yu Chen, Shiqing Ma, Xiangyu Zhang
In this work, we theoretically analyze the connection among cross-entropy loss, attack success rate, and clean accuracy in this setting.
no code implementations • 18 Jun 2022 • Guanhong Tao, Yingqi Liu, Siyuan Cheng, Shengwei An, Zhuo Zhang, QiuLing Xu, Guangyu Shen, Xiangyu Zhang
As such, using the samples derived from our attack in adversarial training can harden a model against these backdoor vulnerabilities.
no code implementations • 11 Feb 2022 • Guangyu Shen, Yingqi Liu, Guanhong Tao, QiuLing Xu, Zhuo Zhang, Shengwei An, Shiqing Ma, Xiangyu Zhang
We develop a novel optimization method for NLP backdoor inversion.
1 code implementation • CVPR 2022 • QiuLing Xu, Guanhong Tao, Xiangyu Zhang
We propose a novel adversarial attack targeting content features in a deep layer, that is, individual neurons in that layer.
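A neuron-targeted attack of this flavor can be sketched with gradient ascent that pushes an input toward a higher activation of one chosen neuron while keeping the perturbation bounded. The toy linear layer, the function names, and the ascent loop below are assumptions for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))  # stand-in for a deep layer's weight matrix

def neuron_activation(x, neuron):
    # Activation of one individual neuron in the (linear) layer.
    return (W @ x)[neuron]

def attack(x, neuron, steps=50, lr=0.1, eps=1.0):
    # For a linear layer, the gradient of (W @ x)[neuron] w.r.t. x
    # is simply the row W[neuron]; ascend along it.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv += lr * W[neuron]
        # Keep the perturbation inside an L-infinity ball of radius eps.
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

x0 = rng.normal(size=4)
x_adv = attack(x0, neuron=3)
```

Because the objective is a single neuron's activation rather than the classification loss, the perturbation manipulates an internal content feature directly instead of just the output label.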
no code implementations • CVPR 2022 • Guanhong Tao, Guangyu Shen, Yingqi Liu, Shengwei An, QiuLing Xu, Shiqing Ma, Pan Li, Xiangyu Zhang
A popular class of trigger inversion methods is optimization-based.
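In that style of approach, one optimizes a mask and a pattern so that stamped inputs move toward a chosen target label. The NumPy sketch below uses a toy linear classifier with hand-derived gradients; the names, the objective, and the update rule are illustrative assumptions, and real methods additionally penalize mask size to favor small triggers.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))  # toy linear classifier: logits = W @ x

def stamp(x, mask, pattern):
    # Blend the trigger pattern into the input where the mask is active.
    return (1 - mask) * x + mask * pattern

def invert_trigger(xs, target, steps=200, lr=0.05):
    # Gradient ascent on the target-class logit of stamped inputs.
    mask, pattern = np.full(5, 0.5), np.zeros(5)
    for _ in range(steps):
        grad_p = np.zeros(5)
        grad_m = np.zeros(5)
        for x in xs:
            grad_p += mask * W[target]           # d logit / d pattern
            grad_m += (pattern - x) * W[target]  # d logit / d mask
        pattern += lr * grad_p / len(xs)
        mask = np.clip(mask + lr * grad_m / len(xs), 0.0, 1.0)
    return mask, pattern

xs = rng.normal(size=(4, 5))
mask, pattern = invert_trigger(xs, target=0)
```

If a small mask and pattern suffice to flip many inputs to one label, that label is suspicious, which is the intuition behind optimization-based backdoor scanning.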
1 code implementation • 9 Feb 2021 • Guangyu Shen, Yingqi Liu, Guanhong Tao, Shengwei An, QiuLing Xu, Siyuan Cheng, Shiqing Ma, Xiangyu Zhang
By iteratively and stochastically selecting the most promising labels for optimization with the guidance of an objective function, we substantially reduce the complexity, making it possible to handle models with many classes.
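The idea of stochastically steering optimization effort toward the most promising labels can be illustrated with a tiny epsilon-greedy scheduler. This is a generic bandit-style sketch, not the paper's actual algorithm; all names and the scoring setup are assumptions.

```python
import random

def select_label(labels, objective, rounds=100, eps=0.2, seed=0):
    # Epsilon-greedy: mostly pull the label whose inversion objective is
    # currently lowest (most promising); occasionally explore at random.
    rng = random.Random(seed)
    pulls = {label: 0 for label in labels}
    for _ in range(rounds):
        if rng.random() < eps:
            choice = rng.choice(labels)
        else:
            choice = min(labels, key=objective)
        pulls[choice] += 1
    # Report the label that received the most optimization effort.
    return max(pulls, key=pulls.get)

# Toy objective values: label 1 looks most promising (lowest value).
scores = {0: 5.0, 1: 1.0, 2: 3.0}
best = select_label([0, 1, 2], objective=lambda label: scores[label])
```

Spending most optimization rounds on a few promising labels, instead of exhaustively inverting a trigger for every label, is what makes scanning models with many classes tractable.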
no code implementations • 1 Jul 2020 • QiuLing Xu, Kevin Bello, Jean Honorio
Robustness of machine learning methods is essential for modern practical applications.