no code implementations • 17 Feb 2023 • Charlie Hou, Hongyuan Zhan, Akshat Shrivastava, Sid Wang, Aleksandr Livshits, Giulia Fanti, Daniel Lazar
To this end, we propose FreD (Federated Private Fréchet Distance) -- a privately computed distance between a prefinetuning dataset and federated datasets.
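The abstract does not spell out how the distance is computed, but a Fréchet distance between two datasets is commonly taken as the Fréchet distance between Gaussians fitted to each dataset's features (as in FID). A minimal, non-private numpy sketch of that quantity (the function name is ours; FreD additionally computes it privately across federated clients):

```python
import numpy as np

def frechet_distance(x, y):
    """Fréchet distance between Gaussians fitted to two sample sets.

    x, y: (n_samples, dim) arrays of features/embeddings.
    """
    mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
    cov_x = np.cov(x, rowvar=False)
    cov_y = np.cov(y, rowvar=False)
    # Tr((cov_x cov_y)^{1/2}) via the eigenvalues of the product,
    # which are real and nonnegative for PSD covariances.
    eigs = np.linalg.eigvals(cov_x @ cov_y)
    tr_sqrt = np.sqrt(np.clip(eigs.real, 0.0, None)).sum()
    mean_term = np.sum((mu_x - mu_y) ** 2)
    return float(mean_term + np.trace(cov_x) + np.trace(cov_y) - 2.0 * tr_sqrt)
```

The distance is zero when both sample sets share mean and covariance, and grows with the shift between the fitted Gaussians.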
1 code implementation • 4 Apr 2022 • Shengyuan Hu, Jack Goetz, Kshitiz Malik, Hongyuan Zhan, Zhe Liu, Yue Liu
Model compression is important in federated learning (FL) with large models to reduce communication cost.
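One generic way to cut the upload cost of model updates is uniform quantization. The toy 8-bit quantizer below is only an illustration of the idea, not the compression scheme proposed in the paper; function names and parameters are ours:

```python
import numpy as np

def quantize_update(update, num_bits=8):
    """Uniformly quantize a model update to num_bits-wide integers.

    Returns the quantized codes plus the (lo, scale) pair needed to decode,
    so a d-dimensional float update ships as d bytes plus two floats.
    """
    lo, hi = float(update.min()), float(update.max())
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((update - lo) / scale).astype(np.uint8)
    return q, lo, scale

def dequantize_update(q, lo, scale):
    """Reconstruct an approximate float update from quantized codes."""
    return q.astype(float) * scale + lo
```

The round-trip error per coordinate is at most half a quantization step (`scale / 2`), which is the usual accuracy/communication trade-off knob.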
no code implementations • 8 Nov 2021 • Dzmitry Huba, John Nguyen, Kshitiz Malik, Ruiyu Zhu, Mike Rabbat, Ashkan Yousefpour, Carole-Jean Wu, Hongyuan Zhan, Pavel Ustinov, Harish Srinivas, Kaikai Wang, Anthony Shoumikhin, Jesik Min, Mani Malek
Our work tackles the aforementioned issues, sketches some of the system design challenges and their solutions, and touches upon principles that emerged from building a production FL system for millions of clients.
no code implementations • 22 Aug 2021 • Hongyuan Zhan, Kamesh Madduri, Venkataraman Shankar
In this paper, we propose a convex formulation for learning a logistic regression model (logit) with latent heterogeneous effects on sub-populations.
no code implementations • 11 Jun 2021 • John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael Rabbat, Mani Malek, Dzmitry Huba
On the other hand, asynchronous aggregation of client updates in FL (i.e., asynchronous FL) alleviates the scalability issue.
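A buffered variant of asynchronous aggregation can be sketched in a few lines: the server accepts client deltas whenever they arrive and applies an averaged step once a small buffer fills. This is a toy simplification in the spirit of buffered asynchronous FL (class name and the plain averaging are our illustration, not the paper's exact protocol):

```python
import numpy as np

class BufferedAsyncServer:
    """Toy FL server: buffer K asynchronous client deltas, then average."""

    def __init__(self, model, buffer_size):
        self.model = np.asarray(model, dtype=float)
        self.buffer_size = buffer_size
        self.buffer = []

    def receive(self, client_delta):
        """Accept one client's update; apply the buffer when it is full."""
        self.buffer.append(np.asarray(client_delta, dtype=float))
        if len(self.buffer) >= self.buffer_size:
            self.model = self.model + np.mean(self.buffer, axis=0)
            self.buffer.clear()
        return self.model
```

Unlike synchronous FL, no client is blocked waiting for a fixed cohort: the server makes progress as soon as any `buffer_size` updates have arrived.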
no code implementations • WS 2019 • Shrey Desai, Hongyuan Zhan, Ahmed Aly
The Lottery Ticket Hypothesis suggests that large, over-parameterized neural networks contain small, sparse subnetworks that can be trained in isolation to reach similar (or better) test accuracy.
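The core operation behind lottery-ticket-style experiments is magnitude pruning: keep only the largest-magnitude weights and rewind the survivors to their initial values before retraining. A minimal numpy sketch of building the pruning mask (the function name is ours, and the full procedure typically prunes iteratively):

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Boolean mask keeping the largest-magnitude entries of `weights`.

    sparsity: fraction of weights to zero out (0.0 keeps everything).
    Ties at the threshold may prune slightly more than requested.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to drop
    if k == 0:
        return np.ones(weights.shape, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.abs(weights) > threshold
```

Applying `weights * mask` to the rewound initialization yields the sparse "ticket" that is then retrained in isolation.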
no code implementations • 1 Nov 2018 • Hongyuan Zhan, Gabriel Gomes, Xiaoye S. Li, Kamesh Madduri, Kesheng Wu
Computational efficiency is an important consideration for deploying machine learning models for time series prediction in an online setting.