1 code implementation • 16 Mar 2024 • Ziqi Zhou, Minghui Li, Wei Liu, Shengshan Hu, Yechao Zhang, Wei Wan, Lulu Xue, Leo Yu Zhang, Dezhong Yao, Hai Jin
In response to these challenges, we propose Genetic Evolution-Nurtured Adversarial Fine-tuning (Gen-AF), a two-stage adversarial fine-tuning approach aimed at enhancing the robustness of downstream models.
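The two-stage Gen-AF procedure itself is not described in this excerpt; as a hedged illustration of the general technique it builds on — adversarial fine-tuning, i.e. updating a model on perturbed inputs — the following minimal sketch uses a toy logistic-regression "model" and FGSM-style perturbations. The model, hyperparameters, and function names are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def fgsm_perturb(w, b, x, y, eps):
    """FGSM-style step: move x in the direction that increases the
    logistic loss, bounded by eps in the L-infinity norm.
    (Illustrative; not the paper's perturbation scheme.)"""
    margin = y * (x @ w + b)
    # d(loss)/dx for loss = log(1 + exp(-margin)) is -y * sigmoid(-margin) * w
    grad_x = -y * (1.0 / (1.0 + np.exp(margin))) * w
    return x + eps * np.sign(grad_x)

def adversarial_finetune(w, b, X, Y, eps=0.1, lr=0.1, epochs=200):
    """One-stage adversarial fine-tuning: at each step, perturb the
    input, then take a gradient-descent step on the perturbed loss."""
    for _ in range(epochs):
        for x, y in zip(X, Y):
            x_adv = fgsm_perturb(w, b, x, y, eps)
            margin = y * (x_adv @ w + b)
            s = 1.0 / (1.0 + np.exp(margin))   # sigmoid(-margin)
            w += lr * y * s * x_adv            # descend the log-loss
            b += lr * y * s
    return w, b
```

After training, the model classifies both clean and eps-bounded perturbed inputs correctly on separable toy data, which is the basic goal of adversarial fine-tuning.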
no code implementations • 18 Dec 2023 • Wei Wan, Yuxuan Ning, Shengshan Hu, Lulu Xue, Minghui Li, Leo Yu Zhang, Hai Jin
This attack unveils the vulnerabilities in SFL, challenging the conventional belief that SFL is robust against poisoning attacks.
1 code implementation • 30 Nov 2023 • Xianlong Wang, Shengshan Hu, Minghui Li, Zhifei Yu, Ziqi Zhou, Leo Yu Zhang
Through validation experiments that support our hypothesis, we further design a random matrix to boost both $\Theta_{imi}$ and $\Theta_{imc}$, achieving a notable defense effect.
1 code implementation • 14 Aug 2023 • Ziqi Zhou, Shengshan Hu, Minghui Li, Hangtao Zhang, Yechao Zhang, Hai Jin
In this work, we propose AdvCLIP, the first attack framework for generating downstream-agnostic adversarial examples based on cross-modal pre-trained encoders.
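AdvCLIP's actual attack pipeline is not detailed in this excerpt. As a hedged sketch of the underlying idea — a downstream-agnostic perturbation that pushes an input's embedding away from its clean embedding, so that any head built on top of the frozen encoder degrades — the code below substitutes a toy linear map for a real cross-modal encoder. Everything here is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

def embed(W, x):
    """Toy stand-in for a pre-trained encoder: a fixed linear map."""
    return W @ x

def feature_attack(W, x, eps=0.5, steps=50, lr=0.1):
    """Gradient ascent on ||embed(x + d) - embed(x)||^2 subject to an
    L-infinity bound on d. For the linear toy encoder, the gradient
    with respect to d is 2 * W.T @ (W @ d), so no autodiff is needed."""
    d = np.full_like(x, 1e-2)  # small deterministic init; any nonzero start works
    for _ in range(steps):
        grad = 2.0 * W.T @ (W @ d)
        d = np.clip(d + lr * np.sign(grad), -eps, eps)
    return x + d
```

The perturbation stays within the eps-ball around the input while its embedding moves far from the clean one; a real attack would replace the linear map with the pre-trained encoder and backpropagate through it.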
1 code implementation • 15 Jul 2023 • Yechao Zhang, Shengshan Hu, Leo Yu Zhang, Junyu Shi, Minghui Li, Xiaogeng Liu, Wei Wan, Hai Jin
Building on these insights, we explore the impacts of data augmentation and gradient regularization on transferability and identify that the trade-off generally exists in the various training mechanisms, thus building a comprehensive blueprint for the regulation mechanism behind transferability.
2 code implementations • 28 Jun 2023 • Yining Hua, Jiageng Wu, Shixu Lin, Minghui Li, Yujie Zhang, Dinah Foer, Siwen Wang, Peilin Zhou, Jie Yang, Li Zhou
Conclusions: This study advances public health research by implementing a novel, systematic pipeline for curating symptom lexicons from social media data.
no code implementations • 17 May 2023 • Jiageng Wu, Xian Wu, Zhaopeng Qiu, Minghui Li, Yingying Zhang, Yefeng Zheng, Changzheng Yuan, Jie Yang
We systematically evaluate LLMs in the Chinese medical context and develop a novel in-context learning framework to enhance their performance.
1 code implementation • CVPR 2023 • Xiaogeng Liu, Minghui Li, Haoyu Wang, Shengshan Hu, Dengpan Ye, Hai Jin, Libing Wu, Chaowei Xiao
Deep neural networks have been proven vulnerable to backdoor attacks.
no code implementations • 9 Feb 2023 • Sheng Hong, Minghui Li, Cunhua Pan, Marco Di Renzo, Wei Zhang, Lajos Hanzo
A two-step positioning scheme is exploited, where the channel parameters are first acquired, and the position-related parameters are then estimated.
no code implementations • 22 Nov 2022 • Shengshan Hu, Junwei Zhang, Wei Liu, Junhui Hou, Minghui Li, Leo Yu Zhang, Hai Jin, Lichao Sun
In addition, existing attack approaches against point cloud classifiers cannot be applied to completion models due to their different output forms and attack goals.
1 code implementation • CVPR 2022 • Shengshan Hu, Xiaogeng Liu, Yechao Zhang, Minghui Li, Leo Yu Zhang, Hai Jin, Libing Wu
While deep face recognition (FR) systems have shown impressive performance in identification and verification, they also raise privacy concerns due to their excessive surveillance of users, especially for public face images widely spread on social networks.
no code implementations • 22 Feb 2020 • Minghui Li, Sherman S. M. Chow, Shengshan Hu, Yuejing Yan, Chao Shen, Qian Wang
This paper proposes a new scheme for privacy-preserving neural network prediction in the outsourced setting, i.e., the server cannot learn the query, (intermediate) results, or the model.
no code implementations • 9 Aug 2014 • Bo Han, Bo He, Rui Nian, Mengmeng Ma, Shujing Zhang, Minghui Li, Amaury Lendasse
Extreme learning machine (ELM), a neural network algorithm, offers advantages such as fast training speed and a simple structure, but weak robustness is an unavoidable defect of the original ELM on blended data.
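For context, the original ELM fixes randomly drawn hidden-layer weights and solves only the output weights in closed form via least squares, which is where its training speed comes from. A minimal sketch of that baseline follows (the robustness modification this paper proposes is not reproduced; function names and the tanh activation are illustrative choices):

```python
import numpy as np

def elm_fit(X, T, n_hidden=50, rng=None):
    """Basic ELM: hidden weights W, b are random and stay fixed;
    only the output weights beta are solved in closed form."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # random nonlinear feature map
    beta = np.linalg.pinv(H) @ T    # Moore-Penrose least-squares solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because only `beta` is learned, training reduces to a single pseudo-inverse, but the random features also make the fit sensitive to noisy ("blended") data, which motivates robust variants like the one this paper studies.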