no code implementations • 25 May 2024 • Mingli Zhu, Siyuan Liang, Baoyuan Wu
Surprisingly, we find that the original backdoors still persist in defense models derived from existing post-training defense strategies; we quantify this persistence with a novel metric, the backdoor existence coefficient.
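The paper's backdoor existence coefficient is not defined in this excerpt, so the sketch below uses a simpler, commonly reported proxy for backdoor persistence: the residual attack success rate (ASR) on triggered inputs after a defense is applied. The trigger string, the toy models, and the 60% survival rate are all hypothetical stand-ins for illustration only.

```python
import random

random.seed(0)

TRIGGER = "##"  # hypothetical trigger token appended to inputs

def backdoored_model(x):
    """Toy classifier: predicts the attacker's target class (1)
    whenever the trigger is present, else class 0."""
    return 1 if TRIGGER in x else 0

def defended_model(x):
    """Mock 'defended' model: the defense suppresses the trigger
    only part of the time, so the backdoor partially survives."""
    if TRIGGER in x and random.random() < 0.6:  # assumed 60% survival
        return 1
    return 0

def attack_success_rate(model, n=1000):
    """Fraction of triggered inputs classified as the target class."""
    triggered = [f"sample{i}{TRIGGER}" for i in range(n)]
    return sum(model(x) == 1 for x in triggered) / n

asr_before = attack_success_rate(backdoored_model)
asr_after = attack_success_rate(defended_model)
print(f"ASR before defense: {asr_before:.2f}")
print(f"ASR after defense:  {asr_after:.2f}")
```

A residual ASR well above zero after the defense is the kind of signal the paper's finding refers to: the defense weakens, but does not remove, the original backdoor.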
no code implementations • 26 Jan 2024 • Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, Mingli Zhu, Ruotong Wang, Li Liu, Chao Shen
We hope that our efforts build a solid foundation for backdoor learning, helping researchers investigate existing algorithms, develop more innovative ones, and explore the intrinsic mechanisms of backdoor learning.
no code implementations • 14 Jan 2024 • Mingli Zhu, Zihao Zhu, Sihong Chen, Chen Chen, Baoyuan Wu
To tackle the overfitting challenge, we design a new ensemble-model framework, coupled with data augmentation, to boost generalization.
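The excerpt does not describe the paper's ensemble or augmentation in detail; the following is a minimal, self-contained sketch of the general idea, assuming a toy 1-D two-class dataset, Gaussian-jitter augmentation, bootstrap-trained threshold learners, and majority voting — all illustrative choices, not the paper's design.

```python
import random
import statistics

random.seed(42)

def make_data(n=200):
    """Toy 1-D two-class data: class 0 near -1, class 1 near +1."""
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        x = (1 if y else -1) + random.gauss(0, 0.5)
        data.append((x, y))
    return data

def augment(data, copies=3, noise=0.3):
    """Simple data augmentation: jitter each sample with Gaussian noise."""
    out = list(data)
    for x, y in data:
        for _ in range(copies):
            out.append((x + random.gauss(0, noise), y))
    return out

def train_threshold(data):
    """Weak learner: threshold at the midpoint of the two class means."""
    m0 = statistics.mean(x for x, y in data if y == 0)
    m1 = statistics.mean(x for x, y in data if y == 1)
    t = (m0 + m1) / 2
    return lambda x: 1 if x > t else 0

def train_ensemble(data, k=5):
    """Train k weak learners on bootstrap-resampled, augmented data;
    predict by majority vote, which smooths individual overfitting."""
    models = []
    for _ in range(k):
        boot = [random.choice(data) for _ in range(len(data))]
        models.append(train_threshold(augment(boot)))
    def predict(x):
        return 1 if sum(m(x) for m in models) > k // 2 else 0
    return predict

train = make_data()
test = make_data()
ens = train_ensemble(train)
acc = sum(ens(x) == y for x, y in test) / len(test)
print(f"ensemble accuracy: {acc:.2f}")
```

Both ingredients target the same failure mode: augmentation enlarges the effective training set seen by each member, and averaging over members reduces the variance that overfitting introduces.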
no code implementations • 13 Dec 2023 • Baoyuan Wu, Shaokui Wei, Mingli Zhu, Meixi Zheng, Zihao Zhu, Mingda Zhang, Hongrui Chen, Danni Yuan, Li Liu, Qingshan Liu
The adversarial phenomenon has been widely observed in machine learning (ML) systems, especially those using deep neural networks: in particular cases, ML systems may produce predictions that are inconsistent with, and incomprehensible to, humans.
no code implementations • 20 Nov 2023 • Siyuan Liang, Mingli Zhu, Aishan Liu, Baoyuan Wu, Xiaochun Cao, Ee-Chien Chang
This paper reveals a threat in this practical scenario: backdoor attacks can remain effective even after defenses are applied. It introduces the \emph{\toolns} attack, which is resistant to both backdoor detection and model fine-tuning defenses.
no code implementations • ICCV 2023 • Mingli Zhu, Shaokui Wei, Li Shen, Yanbo Fan, Baoyuan Wu
Fine-tuning based on benign data is a natural defense to erase the backdoor effect in a backdoored model.
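To make the fine-tuning defense concrete, here is a minimal sketch, assuming a toy logistic-regression "model" whose second input feature acts as the backdoor trigger. The poisoning scheme, feature layout, and hyperparameters are all hypothetical; the point is only the mechanism the sentence describes — continued training on benign data weakens the trigger's learned effect.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def sgd(data, w, lr, epochs):
    """Plain logistic-regression SGD; w = [w_feat, w_trigger, bias]."""
    for _ in range(epochs):
        random.shuffle(data)
        for (x1, x2), y in data:
            g = sigmoid(w[0] * x1 + w[1] * x2 + w[2]) - y
            w[0] -= lr * g * x1
            w[1] -= lr * g * x2
            w[2] -= lr * g
    return w

def make_benign(n=500):
    """Benign data: label follows x1; x2 is a natural pattern
    that appears occasionally, independent of the label."""
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        x1 = (1 if y else -1) + random.gauss(0, 0.5)
        x2 = float(random.random() < 0.2)
        data.append(((x1, x2), y))
    return data

def poison(data, rate=0.4):
    """Backdoor poisoning: add class-0-looking samples with the
    trigger feature on (x2=1) and the attacker's label (y=1)."""
    out = list(data)
    for _ in range(int(rate * len(data))):
        out.append(((-1 + random.gauss(0, 0.5), 1.0), 1))
    return out

def asr(w, n=200):
    """Attack success rate: class-0 inputs with trigger classified as 1."""
    hits = 0
    for _ in range(n):
        x1 = -1 + random.gauss(0, 0.5)
        hits += sigmoid(w[0] * x1 + w[1] * 1.0 + w[2]) > 0.5
    return hits / n

w = sgd(poison(make_benign()), [0.0, 0.0, 0.0], lr=0.5, epochs=20)
asr_before = asr(w)
w = sgd(make_benign(), w, lr=0.1, epochs=20)  # fine-tune on benign data
asr_after = asr(w)
print(f"ASR before fine-tuning: {asr_before:.2f}")
print(f"ASR after fine-tuning:  {asr_after:.2f}")
```

Because the benign data decorrelates the trigger pattern from the target label, gradient updates drive the trigger weight back toward zero, which is exactly why fine-tuning on clean data is a natural (if not always sufficient) backdoor defense.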