Search Results for author: Mingli Zhu

Found 5 papers, 0 papers with code

BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning

no code implementations26 Jan 2024 Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, Mingli Zhu, Ruotong Wang, Li Liu, Chao Shen

We hope that our efforts can build a solid foundation for backdoor learning, helping researchers investigate existing algorithms, develop more innovative algorithms, and explore the intrinsic mechanisms of backdoor learning.

Backdoor Attack

Enhanced Few-Shot Class-Incremental Learning via Ensemble Models

no code implementations14 Jan 2024 Mingli Zhu, Zihao Zhu, Sihong Chen, Chen Chen, Baoyuan Wu

To tackle the overfitting challenge, we design a new ensemble model framework that cooperates with data augmentation to boost generalization.

Data Augmentation Few-Shot Class-Incremental Learning +2

Defenses in Adversarial Machine Learning: A Survey

no code implementations13 Dec 2023 Baoyuan Wu, Shaokui Wei, Mingli Zhu, Meixi Zheng, Zihao Zhu, Mingda Zhang, Hongrui Chen, Danni Yuan, Li Liu, Qingshan Liu

Adversarial phenomena have been widely observed in machine learning (ML) systems, especially those using deep neural networks: in some particular cases, ML systems may produce predictions that are inconsistent with, and incomprehensible to, humans.

BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning

no code implementations20 Nov 2023 Siyuan Liang, Mingli Zhu, Aishan Liu, Baoyuan Wu, Xiaochun Cao, Ee-Chien Chang

This paper reveals the threat that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied, and introduces the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.

Backdoor Attack Contrastive Learning
