no code implementations • 19 Mar 2024 • Cheng-Long Wang, Qi Li, Zihang Xiang, Yinzhi Cao, Di Wang
Our analysis, conducted across multiple unlearning benchmarks, reveals that these algorithms inconsistently fulfill their unlearning commitments, for two main reasons: 1) unlearning new data can significantly degrade the unlearning utility of previously requested data, and 2) approximate algorithms fail to ensure equitable unlearning utility across different groups.
no code implementations • 20 Feb 2024 • Zihang Xiang, Chenglong Wang, Di Wang
Recent works propose a generic private solution for the tuning process, yet a fundamental question still persists: is the current privacy bound for this solution tight?
no code implementations • 12 Nov 2023 • Zihang Xiang, Tianhao Wang, Di Wang
In this study, we propose a solution that specifically addresses node-level privacy.
no code implementations • 12 Oct 2023 • Hanpu Shen, Cheng-Long Wang, Zihang Xiang, Yiming Ying, Di Wang
This paper focuses on the problem of Differentially Private Stochastic Optimization for (multi-layer) fully connected neural networks with a single output node.
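The core mechanism of differentially private stochastic optimization can be illustrated with the standard DP-SGD recipe (per-example gradient clipping plus Gaussian noise). This is a generic sketch of that well-known technique, not the specific algorithm of the paper above; the parameter names (`clip_norm`, `noise_multiplier`, `lr`) are illustrative.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1):
    """One generic DP-SGD step: clip each per-example gradient to
    clip_norm, average, add calibrated Gaussian noise, then descend."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise scale is proportional to the clipping bound (sensitivity).
    noise = np.random.normal(
        0.0, noise_multiplier * clip_norm / len(per_example_grads),
        size=avg.shape)
    return params - lr * (avg + noise)
```

Clipping bounds each example's influence on the update (the sensitivity), which is what lets the Gaussian noise yield a differential-privacy guarantee.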
1 code implementation • 15 Apr 2023 • Zihang Xiang, Tianhao Wang, WanYu Lin, Di Wang
In contrast, we leverage the random noise to construct an aggregation that effectively rejects many existing Byzantine attacks.
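The idea of combining random noise with a robust aggregation rule can be sketched generically; this is not the paper's actual construction, just a minimal illustration (hypothetical `noise_std` parameter) of why a robust statistic such as the coordinate-wise median limits the influence of Byzantine updates even after noise injection.

```python
import numpy as np

def robust_noisy_aggregate(updates, noise_std=0.1, seed=None):
    """Illustrative sketch: perturb each client update with Gaussian
    noise, then aggregate with the coordinate-wise median so that a
    single extreme (Byzantine) update cannot dominate the result."""
    rng = np.random.default_rng(seed)
    noisy = [u + rng.normal(0.0, noise_std, size=u.shape) for u in updates]
    return np.median(np.stack(noisy), axis=0)
```

With a majority of honest clients, the median stays near the honest updates regardless of how large a malicious outlier is, unlike a plain mean.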