Search Results for author: Hongyuan Zhan

Found 7 papers, 1 paper with code

Privately Customizing Prefinetuning to Better Match User Data in Federated Learning

no code implementations • 17 Feb 2023 • Charlie Hou, Hongyuan Zhan, Akshat Shrivastava, Sid Wang, Aleksandr Livshits, Giulia Fanti, Daniel Lazar

To this end, we propose FreD (Federated Private Fréchet Distance) -- a privately computed distance between a prefinetuning dataset and federated datasets.

Federated Learning · Language Modelling · +2
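
For intuition, the (non-private) Fréchet distance between two datasets can be obtained by fitting a Gaussian to each set of embeddings and comparing the fits. The sketch below, with hypothetical inputs emb_a and emb_b, illustrates only that base quantity; FreD's actual contribution, computing it privately across federated datasets, is not shown.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(emb_a, emb_b):
    """Squared Frechet distance between Gaussian fits of two embedding
    sets: ||mu_a - mu_b||^2 + Tr(Sa + Sb - 2 (Sa Sb)^{1/2})."""
    mu_a, mu_b = emb_a.mean(0), emb_b.mean(0)
    cov_a = np.cov(emb_a, rowvar=False)
    cov_b = np.cov(emb_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):      # sqrtm can leave tiny imaginary
        covmean = covmean.real        # parts due to numerical noise
    diff = mu_a - mu_b
    return diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean)
```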

FedSynth: Gradient Compression via Synthetic Data in Federated Learning

1 code implementation • 4 Apr 2022 • Shengyuan Hu, Jack Goetz, Kshitiz Malik, Hongyuan Zhan, Zhe Liu, Yue Liu

In federated learning (FL) with large models, model compression is important for reducing communication cost.

Federated Learning · Model Compression
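
As a rough illustration of the idea in the title, a client can compress its update by learning a small synthetic batch whose gradients on the current model match its true local gradients, then uploading that batch instead of the full gradient. The sketch below is an assumption-heavy rendering of such a gradient-matching step (model, loss_fn, and true_grads are hypothetical placeholders), not the paper's exact algorithm.

```python
import torch

def synthesize_update(model, loss_fn, true_grads, n_syn=16, in_dim=32,
                      steps=200, lr=0.1):
    """Learn a tiny synthetic batch whose gradients on the current model
    approximate the client's true local gradients; uploading the batch
    (instead of the full gradient) compresses communication."""
    x_syn = torch.randn(n_syn, in_dim, requires_grad=True)
    y_syn = torch.randn(n_syn, 1, requires_grad=True)
    opt = torch.optim.Adam([x_syn, y_syn], lr=lr)
    params = [p for p in model.parameters() if p.requires_grad]
    for _ in range(steps):
        opt.zero_grad()
        syn_loss = loss_fn(model(x_syn), y_syn)
        # create_graph=True lets us backprop through the gradients themselves
        syn_grads = torch.autograd.grad(syn_loss, params, create_graph=True)
        match = sum(((g - t) ** 2).sum() for g, t in zip(syn_grads, true_grads))
        match.backward()
        opt.step()
    return x_syn.detach(), y_syn.detach()  # server re-derives the update by
                                           # training on this synthetic batch
```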

Papaya: Practical, Private, and Scalable Federated Learning

no code implementations • 8 Nov 2021 • Dzmitry Huba, John Nguyen, Kshitiz Malik, Ruiyu Zhu, Mike Rabbat, Ashkan Yousefpour, Carole-Jean Wu, Hongyuan Zhan, Pavel Ustinov, Harish Srinivas, Kaikai Wang, Anthony Shoumikhin, Jesik Min, Mani Malek

Our work tackles the aforementioned issues, sketches some of the system design challenges and their solutions, and touches upon principles that emerged from building a production FL system for millions of clients.

Federated Learning

Convex Latent Effect Logit Model via Sparse and Low-rank Decomposition

no code implementations • 22 Aug 2021 • Hongyuan Zhan, Kamesh Madduri, Venkataraman Shankar

In this paper, we propose a convex formulation for learning a logistic regression model (logit) with latent heterogeneous effects on sub-populations.

Discrete Choice Models · regression
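
A minimal sketch of what such a formulation can look like: per-group logit coefficients split into a sparse part (idiosyncratic latent effects) plus a low-rank part (shared structure), with an l1 and a nuclear-norm penalty keeping the problem jointly convex. The toy data, penalty weights, and CVXPY modeling choices below are illustrative, not the paper's exact objective.

```python
import cvxpy as cp
import numpy as np

# Hypothetical toy instance: n samples, d features, g sub-populations.
n, d, g = 200, 10, 5
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))
grp = rng.integers(0, g, size=n)          # sub-population of each sample
y = rng.choice([-1.0, 1.0], size=n)       # labels in {-1, +1}
M = np.zeros((n, g))
M[np.arange(n), grp] = 1.0                # group-membership indicator

# Per-group coefficients Theta (g x d) = sparse part S + low-rank part L.
S = cp.Variable((g, d))
L = cp.Variable((g, d))
Theta = S + L

# Margin of sample i under its own group's coefficients: x_i^T theta_{g(i)}.
margins = cp.sum(cp.multiply(M, X @ Theta.T), axis=1)

# Logistic loss + l1 (sparsity) + nuclear norm (low rank): jointly convex.
lam1, lam2 = 0.1, 0.1                     # hypothetical penalty weights
loss = cp.sum(cp.logistic(cp.multiply(-y, margins))) / n
prob = cp.Problem(cp.Minimize(loss + lam1 * cp.sum(cp.abs(S))
                              + lam2 * cp.normNuc(L)))
prob.solve()
```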

Federated Learning with Buffered Asynchronous Aggregation

no code implementations • 11 Jun 2021 • John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael Rabbat, Mani Malek, Dzmitry Huba

On the other hand, asynchronous aggregation of client updates in FL (i.e., asynchronous FL) alleviates the scalability issue.

Federated Learning · Privacy Preserving
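
A minimal sketch of buffered asynchronous aggregation, assuming a single parameter vector and clients that return updates at arbitrary times; the buffer size K, server_lr, and on_client_update names are illustrative, not the paper's API.

```python
import numpy as np

K = 10                      # buffer size: client updates per server step
model = np.zeros(100)       # global model parameters
buffer = []

def on_client_update(delta, server_lr=1.0):
    """Called whenever any client finishes local training, asynchronously;
    the server applies an averaged step once K updates have accumulated."""
    global model, buffer
    buffer.append(delta)
    if len(buffer) == K:     # secure aggregation could run over the buffer
        model += server_lr * np.mean(buffer, axis=0)
        buffer = []          # clear the buffer and keep accepting updates
```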

Evaluating Lottery Tickets Under Distributional Shifts

no code implementations • WS 2019 • Shrey Desai, Hongyuan Zhan, Ahmed Aly

The Lottery Ticket Hypothesis suggests that large, over-parameterized neural networks contain small, sparse subnetworks that can be trained in isolation to reach similar (or better) test accuracy.

Inductive Bias
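
For reference, such "winning tickets" are typically found by iterative magnitude pruning with weight rewinding. The sketch below assumes a hypothetical train_fn that trains the model while respecting the masks; it is an illustration of the standard procedure, not this paper's evaluation setup.

```python
import torch

def find_winning_ticket(model, train_fn, prune_frac=0.2, rounds=3):
    """Sketch of iterative magnitude pruning: train, prune the smallest
    surviving weights, rewind the rest to their initial values, repeat."""
    init = {k: v.detach().clone() for k, v in model.named_parameters()}
    masks = {k: torch.ones_like(v) for k, v in init.items()}
    for _ in range(rounds):
        train_fn(model, masks)                    # caller applies the masks
        with torch.no_grad():
            for k, w in model.named_parameters():
                alive = w[masks[k].bool()].abs()
                if alive.numel() > 0:
                    thresh = alive.quantile(prune_frac)   # smallest 20%
                    masks[k] *= (w.abs() > thresh).float()
                w.copy_(init[k] * masks[k])       # rewind surviving weights
    return masks
```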
