no code implementations • 22 Mar 2024 • Yinggui Wang, Wei Huang, Le Yang
Thus, the SLU system needs to ensure that a potential malicious attacker cannot deduce users' sensitive attributes, while avoiding a significant loss of SLU accuracy.
no code implementations • 14 Mar 2024 • Yinggui Wang, Yuanqing Huang, Jianshu Li, Le Yang, Kai Song, Lei Wang
Specifically, face images are masked in the frequency domain using an adaptive MixUp strategy.
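The entry does not spell out the masking scheme, but the general idea of MixUp-style masking in the frequency domain can be sketched as below. This is an illustrative stand-in, not the paper's method: `amplitude_mixup` and the choice of blending the amplitude spectra of two images (keeping the original phase) with a fixed weight `lam` are assumptions; the paper's adaptive strategy for choosing the weight and the masked regions is not reproduced here.

```python
import numpy as np

def amplitude_mixup(img, ref, lam=0.7):
    """Sketch of frequency-domain MixUp masking (illustrative, not the paper's
    exact method): blend the FFT amplitude of `img` with that of a reference
    image `ref` using MixUp weight `lam`, keeping the phase of `img`."""
    f_img = np.fft.fft2(img)
    f_ref = np.fft.fft2(ref)
    # Mix amplitude spectra; identity information is partly carried by amplitude.
    amp = lam * np.abs(f_img) + (1.0 - lam) * np.abs(f_ref)
    # Keep the original phase, which preserves coarse spatial structure.
    mixed = amp * np.exp(1j * np.angle(f_img))
    return np.real(np.fft.ifft2(mixed))
```

With `lam=1.0` the image is returned unchanged; smaller `lam` suppresses more of the original amplitude information.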
no code implementations • 24 Jan 2024 • Yuanqing Huang, Huilong Chen, Yinggui Wang, Lei Wang
To the best of our knowledge, the proposed attack model is the first in the literature developed for FR models without a classification layer.
no code implementations • 18 Jan 2024 • Wei Huang, Yinggui Wang, Anda Cheng, Aihui Zhou, Chaofan Yu, Lei Wang
In this paper, we propose a secure distributed LLM based on model slicing.
no code implementations • 10 Nov 2023 • Mingyuan Fan, Xiaodan Li, Cen Chen, Yinggui Wang
We reveal that input-regularization-based methods make the resulting adversarial examples biased towards flat extreme regions.
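To make the claim concrete, input-regularization-based attacks typically average the loss gradient over randomly transformed copies of the input before taking an attack step, which biases the search towards flat regions of the loss surface. A minimal sketch, assuming Gaussian perturbation as the input regularization and a single FGSM-style step (the function names and the specific transformation are illustrative, not from the paper):

```python
import numpy as np

def input_regularized_grad(x, grad_fn, sigma=0.1, n=50, rng=None):
    """Average the gradient over Gaussian-perturbed copies of `x` -- a generic
    stand-in for input-regularization methods (illustrative assumption)."""
    rng = np.random.default_rng(0) if rng is None else rng
    grads = [grad_fn(x + sigma * rng.standard_normal(x.shape)) for _ in range(n)]
    return np.mean(grads, axis=0)

def fgsm_step(x, grad, eps=0.03):
    """One FGSM-style attack step using a (possibly smoothed) gradient."""
    return x + eps * np.sign(grad)
```

Because the averaged gradient approximates the gradient of a locally smoothed loss, the resulting perturbation tends to land in flatter regions than a step along the raw gradient would.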
no code implementations • 29 Jul 2023 • Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li, Ming Gao
To address this challenge, we extend the adaptive risk minimization technique into the unsupervised personalized federated learning setting and propose our method, FedTTA.
1 code implementation • 29 Jul 2023 • Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li, Ming Gao
The resistance of pFL methods with parameter decoupling is attributed to the heterogeneity of the classifiers between malicious clients and their benign counterparts.