no code implementations • 19 Jan 2024 • Youming Tao, Cheng-Long Wang, Miao Pan, Dongxiao Yu, Xiuzhen Cheng, Di Wang
We start by giving a rigorous definition of "exact" federated unlearning, which guarantees that the unlearned model is statistically indistinguishable from the one trained without the deleted data.
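Concretely, the "exact" guarantee is usually instantiated by the retrain-from-scratch baseline: the unlearned model is drawn from the same distribution as a model that never saw the deleted client's data. Below is a minimal sketch of that baseline, assuming hypothetical helpers `local_update` and `aggregate` and a client-keyed dataset dictionary; it illustrates the definition only and is not the paper's algorithm.

```python
# Minimal sketch of the *exact* unlearning baseline: retrain from scratch on
# the retained clients only. The helpers `local_update` and `aggregate` and
# the data layout are assumptions made for illustration.

import copy
import random

def exact_unlearn(client_datasets, deleted_client, init_model,
                  num_rounds, local_update, aggregate, clients_per_round=10):
    """Retrain on the retained clients; the result is distributed exactly as a
    model that was never trained on `deleted_client`'s data."""
    retained = {cid: ds for cid, ds in client_datasets.items() if cid != deleted_client}
    model = copy.deepcopy(init_model)  # restart from the same initialization
    for _ in range(num_rounds):
        # sample participating clients from the retained set only
        participants = random.sample(list(retained), k=min(clients_per_round, len(retained)))
        updates = [local_update(model, retained[cid]) for cid in participants]
        model = aggregate(model, updates)  # e.g., FedAvg-style averaging
    return model
```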
no code implementations • 20 Aug 2023 • Shuzhen Chen, Yuan Yuan, Youming Tao, Zhipeng Cai, Dongxiao Yu
Distributed stochastic optimization methods based on Newton's method offer significant advantages over first-order approaches by exploiting curvature information.
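To illustrate how curvature information enters, here is a minimal sketch of a distributed Newton-type round for a least-squares objective; the shard layout, damping term, and function names are illustrative assumptions, not the paper's method.

```python
# Sketch of one distributed Newton-type round: each worker returns its local
# gradient and Hessian, the server averages them and moves along -H^{-1} g,
# i.e., it uses curvature that a first-order method would ignore.

import numpy as np

def local_stats(w, X, y):
    """Gradient and Hessian of 0.5*||Xw - y||^2 / n on one worker's shard."""
    n = X.shape[0]
    residual = X @ w - y
    return X.T @ residual / n, X.T @ X / n

def newton_round(w, shards, damping=1e-6):
    grads, hessians = zip(*(local_stats(w, X, y) for X, y in shards))
    g = np.mean(grads, axis=0)
    H = np.mean(hessians, axis=0) + damping * np.eye(len(w))  # regularize for stability
    return w - np.linalg.solve(H, g)  # Newton step

# toy usage with 4 synthetic worker shards
rng = np.random.default_rng(0)
shards = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
w = np.zeros(3)
for _ in range(5):
    w = newton_round(w, shards)
```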
no code implementations • 18 Mar 2023 • Youming Tao, Sijia Cui, Wenlu Xu, Haofei Yin, Dongxiao Yu, Weifa Liang, Xiuzhen Cheng
To address this issue, we study stochastic convex and non-convex optimization for federated learning at the edge, and show how to handle heavy-tailed data while simultaneously retaining Byzantine resilience, communication efficiency, and optimal statistical error rates.
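One common recipe for combining heavy-tailed robustness with Byzantine resilience is gradient clipping followed by a coordinate-wise median across clients; the sketch below illustrates that generic template under assumed names (`clip`, `robust_aggregate`, `tau`) and should not be read as the paper's estimator.

```python
# Hedged sketch: clip each client's gradient to tame heavy-tailed noise, then
# aggregate with a coordinate-wise median so a minority of Byzantine clients
# cannot move the update arbitrarily.

import numpy as np

def clip(g, tau):
    """Shrink g to norm at most tau (limits the influence of heavy tails)."""
    norm = np.linalg.norm(g)
    return g if norm <= tau else g * (tau / norm)

def robust_aggregate(client_grads, tau):
    clipped = np.stack([clip(g, tau) for g in client_grads])
    return np.median(clipped, axis=0)  # coordinate-wise median across clients

# toy usage: 8 honest clients with heavy-tailed noise, 2 Byzantine clients
rng = np.random.default_rng(1)
true_grad = np.ones(5)
honest = [true_grad + rng.standard_t(df=2, size=5) for _ in range(8)]
byzantine = [np.full(5, 1e6) for _ in range(2)]
update = robust_aggregate(honest + byzantine, tau=10.0)
```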
no code implementations • 4 Jun 2021 • Youming Tao, Yulian Wu, Peng Zhao, Di Wang
Finally, we establish a lower bound showing that the instance-dependent regret of our improved algorithm is optimal.
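For context, and assuming the standard multi-armed bandit setting, instance-dependent regret is usually stated via the decomposition below; the paper's exact bound and constants are not reproduced here.

```latex
\mathrm{Reg}(T) \;=\; \sum_{a:\,\Delta_a > 0} \Delta_a \, \mathbb{E}\!\left[N_a(T)\right]
```

Here $\Delta_a$ is the suboptimality gap of arm $a$ and $N_a(T)$ the number of times it is pulled up to round $T$; matching a lower bound of this gap-dependent form is what makes the regret instance-dependently optimal.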
no code implementations • 15 Nov 2020 • Youming Tao, Shuzhen Chen, Feng Li, Dongxiao Yu, Jiguo Yu, Hao Sheng
In this paper, we study a distributed privacy-preserving learning problem in social networks with general topology.
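A hedged sketch of a generic template for this setting is given below: decentralized gossip-style averaging over an arbitrary graph, where each node perturbs what it shares with Gaussian noise before neighbors mix it in. The function name, noise mechanism, and step sizes are illustrative assumptions rather than the paper's protocol.

```python
# One round of decentralized, privacy-preserving learning over a general
# topology: every node shares a noise-perturbed model, averages over its
# neighbors (given by `adjacency`), then takes a local gradient step.

import numpy as np

def decentralized_private_step(models, adjacency, grads, lr=0.1, noise_std=0.5):
    n = len(models)
    shared = [m + np.random.normal(scale=noise_std, size=m.shape) for m in models]
    new_models = []
    for i in range(n):
        neighbors = [j for j in range(n) if adjacency[i][j]] + [i]
        mixed = np.mean([shared[j] for j in neighbors], axis=0)  # gossip averaging
        new_models.append(mixed - lr * grads[i])                 # local SGD step
    return new_models
```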