1 code implementation • 20 Mar 2024 • Yanzhou Li, Tianlin Li, Kangjie Chen, Jian Zhang, Shangqing Liu, Wenhan Wang, Tianwei Zhang, Yang Liu
BadEdit outperforms existing backdoor injection techniques in several areas: (1) Practicality: it requires only a minimal dataset for injection (15 samples).
no code implementations • 26 Feb 2024 • Peiheng Zhou, Ming Hu, Xiaofei Xie, Yihao Huang, Kangjie Chen, Mingsong Chen
The Contrastive Language-Image Pre-training (CLIP) model, an effective pre-trained multimodal neural network, has been widely used in distributed machine learning tasks, especially Federated Learning (FL).
no code implementations • 11 Oct 2023 • Renyang Liu, Wei Zhou, Tianwei Zhang, Kangjie Chen, Jun Zhao, Kwok-Yan Lam
Existing black-box attacks have shown promise in crafting adversarial examples (AEs) that deceive deep learning models.
1 code implementation • 14 Jul 2023 • Guanlin Li, Kangjie Chen, Yuan Xu, Han Qiu, Tianwei Zhang
We first introduce an oracle into the adversarial training process to help the model learn a correct data-label conditional distribution.
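As a rough illustration of that idea, the sketch below pairs a standard PGD adversarial-training step with a hypothetical oracle callable that returns soft label distributions for perturbed inputs; the names, the KL loss, and the oracle interface are assumptions made for exposition, not the paper's exact construction.

import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Iterative gradient-sign steps, projected back into an L-inf ball of radius eps.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def oracle_adv_train_step(model, oracle, optimizer, x, y):
    # Train on adversarial examples, but match the oracle's soft label
    # distribution instead of the hard labels (hypothetical `oracle(x_adv)`
    # returns per-class probabilities for the perturbed inputs).
    x_adv = pgd_perturb(model, x, y)
    soft_targets = oracle(x_adv)
    loss = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                    soft_targets, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()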
no code implementations • 14 Jun 2023 • Yanzhou Li, Shangqing Liu, Kangjie Chen, Xiaofei Xie, Tianwei Zhang, Yang Liu
We evaluate our approach on two code understanding tasks and three code generation tasks over seven datasets.
no code implementations • 7 Apr 2022 • Xiaoxuan Lou, Guowen Xu, Kangjie Chen, Guanlin Li, Jiwei Li, Tianwei Zhang
Multiplication-less neural networks significantly reduce time and energy costs on hardware platforms, as compute-intensive multiplications are replaced with lightweight bit-shift operations.
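To make the mechanism concrete, here is a minimal NumPy sketch (with hypothetical helper names) of how rounding a weight to a signed power of two turns a multiplication into a bit shift; real multiplication-less architectures are considerably more elaborate than this.

import numpy as np

def to_signed_power_of_two(w):
    # Round a real-valued weight to sign * 2**k (hypothetical quantizer).
    sign = 1 if w >= 0 else -1
    k = int(np.round(np.log2(abs(w) + 1e-12)))
    return sign, k

def shift_multiply(x_int, sign, k):
    # sign * x * 2**k computed with an integer shift instead of a multiply.
    shifted = x_int << k if k >= 0 else x_int >> -k
    return sign * shifted

sign, k = to_signed_power_of_two(0.23)   # 0.23 is rounded to +2**-2 = 0.25
print(shift_multiply(96, sign, k))       # 96 >> 2 = 24; the exact product is 22.08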
no code implementations • ICLR 2022 • Kangjie Chen, Yuxian Meng, Xiaofei Sun, Shangwei Guo, Tianwei Zhang, Jiwei Li, Chun Fan
The key feature of our attack is that the adversary does not need prior information about the downstream tasks when implanting the backdoor into the pre-trained model.
no code implementations • 9 Jun 2020 • Kangjie Chen, Shangwei Guo, Tianwei Zhang, Xiaofei Xie, Yang Liu
This paper presents the first model extraction attack against Deep Reinforcement Learning (DRL), which enables an external adversary to precisely recover a black-box DRL model solely from its interactions with the environment.
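A bare-bones version of this extraction setting can be sketched as behavioral cloning: query the black-box policy while it interacts with the environment, log (state, action) pairs, and fit a surrogate network on them. The env and victim_act interfaces below are placeholders assumed for illustration; the paper's actual attack goes well beyond this sketch.

import torch
import torch.nn as nn

def collect_demonstrations(env, victim_act, episodes=50):
    # Log (state, action) pairs while the black-box policy acts;
    # `env` is a placeholder with a classic gym-style reset/step interface.
    states, actions = [], []
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = victim_act(s)              # query the victim policy
            states.append(s)
            actions.append(a)
            s, _, done, _ = env.step(a)
    return (torch.tensor(states, dtype=torch.float32),
            torch.tensor(actions, dtype=torch.long))

def fit_surrogate(states, actions, obs_dim, n_actions, epochs=200):
    # Behavioral cloning: supervised learning on the victim's decisions.
    surrogate = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                              nn.Linear(64, n_actions))
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    for _ in range(epochs):
        loss = nn.functional.cross_entropy(surrogate(states), actions)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return surrogate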
no code implementations • 14 May 2020 • Jianwen Sun, Tianwei Zhang, Xiaofei Xie, Lei Ma, Yan Zheng, Kangjie Chen, Yang Liu
Adversarial attacks against conventional Deep Learning (DL) systems and algorithms have been widely studied, and various defenses have been proposed.