Search Results for author: Zihang Xiang

Found 5 papers, 1 paper with code

Towards Lifecycle Unlearning Commitment Management: Measuring Sample-level Approximate Unlearning Completeness

no code implementations • 19 Mar 2024 • Cheng-Long Wang, Qi Li, Zihang Xiang, Yinzhi Cao, Di Wang

Our analysis, conducted across multiple unlearning benchmarks, reveals that these algorithms inconsistently fulfill their unlearning commitments due to two main issues: 1) unlearning new data can significantly affect the unlearning utility of previously requested data, and 2) approximate algorithms fail to ensure equitable unlearning utility across different groups.

Computational Efficiency • Machine Unlearning • +1

How Does Selection Leak Privacy: Revisiting Private Selection and Improved Results for Hyper-parameter Tuning

no code implementations • 20 Feb 2024 • Zihang Xiang, Chenglong Wang, Di Wang

Recent works propose a generic private solution for the tuning process, yet a fundamental question still persists: is the current privacy bound for this solution tight?
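The "generic private solution" referenced here is typically a random-stopping private selection procedure: run a geometrically distributed number of private training trials and release the best result, which keeps the overall privacy cost bounded. The sketch below is a hedged illustration of that general idea, not this paper's analysis; `private_train` is an assumed user-supplied function returning a `(model, dp_score)` pair.

```python
import numpy as np

def private_hyperparameter_tuning(candidates, private_train, gamma=0.1, rng=None):
    """Hedged sketch of generic private selection for hyper-parameter tuning:
    draw a Geometric(gamma) number of trials, run a randomly chosen candidate
    through a differentially private training routine each time, and return
    the best-scoring (model, score) pair. The random stopping time is what
    bounds the cumulative privacy loss of the whole tuning process."""
    rng = rng or np.random.default_rng()
    num_trials = rng.geometric(gamma)  # at least 1 trial
    best = None
    for _ in range(num_trials):
        cand = candidates[rng.integers(len(candidates))]
        model, score = private_train(cand)  # assumed to be a DP subroutine
        if best is None or score > best[1]:
            best = (model, score)
    return best
```

With a larger `gamma`, fewer trials are attempted on average, trading tuning quality for a tighter privacy bound; whether the known bound for this scheme is tight is exactly the question the paper revisits.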

Preserving Node-level Privacy in Graph Neural Networks

no code implementations • 12 Nov 2023 • Zihang Xiang, Tianhao Wang, Di Wang

In this study, we propose a solution that specifically addresses the issue of node-level privacy.

Differentially Private Non-convex Learning for Multi-layer Neural Networks

no code implementations • 12 Oct 2023 • Hanpu Shen, Cheng-Long Wang, Zihang Xiang, Yiming Ying, Di Wang

This paper focuses on the problem of Differentially Private Stochastic Optimization for (multi-layer) fully connected neural networks with a single output node.

Stochastic Optimization
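The standard workhorse for differentially private stochastic optimization of neural networks is DP-SGD: clip each per-sample gradient, average, and add Gaussian noise before the update. The snippet below is a minimal sketch of one such step for illustration only; it is not the paper's algorithm, and the clipping norm, noise multiplier, and learning rate are placeholder values.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, clip_norm=1.0, noise_mult=1.1,
                lr=0.1, rng=None):
    """One DP-SGD step (hedged sketch): bound each example's influence by
    clipping its gradient to L2 norm `clip_norm`, sum, add Gaussian noise
    with std `noise_mult * clip_norm`, average, and take a gradient step."""
    rng = rng or np.random.default_rng()
    n = len(per_sample_grads)
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_sample_grads]
    noisy_mean = (np.sum(clipped, axis=0)
                  + rng.normal(0.0, noise_mult * clip_norm,
                               size=params.shape)) / n
    return params - lr * noisy_mean
```

For multi-layer networks the same clip-and-noise step is applied to the concatenation of all layer gradients; the difficulty the paper addresses is analyzing utility when the resulting objective is non-convex.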

Practical Differentially Private and Byzantine-resilient Federated Learning

1 code implementation • 15 Apr 2023 • Zihang Xiang, Tianhao Wang, WanYu Lin, Di Wang

In contrast, we leverage the random noise to construct an aggregation that effectively rejects many existing Byzantine attacks.

Federated Learning • Privacy Preserving
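One common way to combine differential privacy noise with Byzantine resilience is to pair per-client Gaussian noise with a robust aggregator such as the coordinate-wise median, which tolerates a minority of arbitrarily corrupted updates. The sketch below illustrates only that general pattern; it is not the paper's specific aggregation rule, and `noise_std` is a placeholder.

```python
import numpy as np

def noisy_robust_aggregate(client_updates, noise_std=0.1, rng=None):
    """Hedged sketch: add Gaussian noise to each client's model update (the
    privacy mechanism), then aggregate with the coordinate-wise median so
    that a minority of Byzantine (arbitrary) updates cannot dominate the
    aggregate, unlike a plain mean."""
    rng = rng or np.random.default_rng()
    noisy = np.stack([u + rng.normal(0.0, noise_std, size=u.shape)
                      for u in client_updates])
    return np.median(noisy, axis=0)
```

A plain average of the same updates could be shifted arbitrarily far by a single malicious client, which is why robust aggregation is the usual complement to DP noise in this setting.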
