no code implementations • 4 Dec 2022 • Momin Ahmad Khan, Virat Shejwalkar, Amir Houmansadr, Fatima Muhammad Anwar
We observe that the model updates in SplitFed have significantly lower dimensionality than those in FL, which is known to suffer from the curse of dimensionality.
no code implementations • 4 Oct 2022 • Virat Shejwalkar, Arun Ganesh, Rajiv Mathews, Om Thakkar, Abhradeep Thakurta
Empirically, we show that the last few checkpoints can provide a reasonable lower bound for the variance of a converged DP model.
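The idea of estimating variance from the tail of training can be sketched as follows (an illustrative reconstruction, not the paper's implementation; the checkpoint format and window size `k` are assumptions):

```python
import numpy as np

def checkpoint_variance(checkpoints, k=5):
    """Estimate per-parameter variance from the last k training checkpoints.

    `checkpoints` is a list of flat parameter vectors (a hypothetical
    format). The sample variance across the tail checkpoints serves as an
    empirical lower bound on the variance of the converged DP model.
    """
    tail = np.stack(checkpoints[-k:])   # shape: (k, num_params)
    return tail.var(axis=0, ddof=1)     # unbiased sample variance per parameter

# Toy usage: noisy parameters hovering around a converged value.
rng = np.random.default_rng(0)
ckpts = [np.array([1.0, -2.0]) + rng.normal(0, 0.1, 2) for _ in range(20)]
var_est = checkpoint_variance(ckpts, k=5)
```

The tail checkpoints of a converged run behave like samples around a fixed point, which is what makes their spread usable as a variance estimate.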
no code implementations • 15 Oct 2021 • Xinyu Tang, Saeed Mahloujifar, Liwei Song, Virat Shejwalkar, Milad Nasr, Amir Houmansadr, Prateek Mittal
The goal of this work is to train ML models that have high membership privacy while largely preserving their utility; we therefore aim for an empirical membership privacy guarantee as opposed to the provable privacy guarantees provided by techniques like differential privacy, as such techniques are shown to deteriorate model utility.
no code implementations • 8 Oct 2021 • Hamid Mozaffari, Virat Shejwalkar, Amir Houmansadr
The FRL server uses a voting mechanism to aggregate the parameter rankings submitted by clients in each training epoch, generating the global ranking for the next epoch.
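A minimal sketch of such rank aggregation, using a Borda-style vote as a stand-in for FRL's actual mechanism (the ranking format and scoring rule here are assumptions for illustration):

```python
import numpy as np

def aggregate_rankings(client_rankings):
    """Borda-style vote over client parameter rankings.

    Each client submits a permutation of parameter indices, most important
    first (a simplified stand-in for FRL's ranking format). The server sums
    the positions each parameter receives across clients; parameters that
    many clients rank highly get low totals and lead the global ranking.
    """
    n = len(client_rankings[0])
    scores = np.zeros(n)
    for ranking in client_rankings:
        for position, param in enumerate(ranking):
            scores[param] += position   # lower total = ranked higher overall
    return list(np.argsort(scores))     # global ranking, best first

# Three clients rank three parameters; parameter 0 wins the vote.
rankings = [[0, 2, 1], [0, 1, 2], [2, 0, 1]]
global_ranking = aggregate_rankings(rankings)  # → [0, 2, 1]
```

Because only discrete rankings are exchanged and aggregated by vote, a single malicious client can shift each parameter's position by at most one ballot, which is the intuition behind the robustness claim.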
no code implementations • 29 Sep 2021 • Hamid Mozaffari, Virat Shejwalkar, Amir Houmansadr
FSL clients share local subnetworks in the form of rankings of network edges; more useful edges have higher ranks.
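Selecting a subnetwork from edge rankings can be illustrated with a top-k mask over per-edge scores (the scores and the flat-mask representation are assumptions, not FSL's exact construction):

```python
import numpy as np

def topk_edge_mask(scores, k):
    """Keep the k highest-scoring edges as a binary subnetwork mask.

    `scores` is a flat array of per-edge importance scores, an illustrative
    stand-in for the learned scores behind FSL's edge rankings; the mask
    retains only the top-k edges, i.e. the highest-ranked ones.
    """
    mask = np.zeros_like(scores, dtype=bool)
    mask[np.argsort(scores)[-k:]] = True
    return mask

scores = np.array([0.1, 0.9, 0.4, 0.7])
mask = topk_edge_mask(scores, k=2)   # keeps edges 1 and 3
```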
1 code implementation • 23 Aug 2021 • Virat Shejwalkar, Amir Houmansadr, Peter Kairouz, Daniel Ramage
While recent works have indicated that federated learning (FL) may be vulnerable to poisoning attacks by compromised clients, their real impact on production FL systems is not fully understood.
no code implementations • 2 Oct 2020 • Vasisht Duddu, Antoine Boutet, Virat Shejwalkar
We choose quantization as the design choice for building highly efficient and private models.
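For concreteness, a generic uniform symmetric quantization scheme looks like the following (a textbook sketch, not necessarily the paper's exact scheme):

```python
import numpy as np

def quantize(weights, num_bits=8):
    """Uniform symmetric quantization of a float weight tensor.

    Maps float weights onto `num_bits`-bit signed integers via a single
    scale factor, trading precision for memory and compute efficiency.
    """
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(weights).max() / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.03], dtype=np.float32)
q, s = quantize(w)
w_hat = dequantize(q, s)   # within one quantization step of w
```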
no code implementations • 24 Dec 2019 • Hongyan Chang, Virat Shejwalkar, Reza Shokri, Amir Houmansadr
Collaborative (federated) learning enables multiple parties to train a model without sharing their private data, but through repeated sharing of the parameters of their local models.
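The parameter-sharing loop described above can be sketched as a minimal FedAvg-style round on a toy mean-estimation objective (the objective, learning rate, and round count are illustrative assumptions):

```python
import numpy as np

def federated_round(global_params, client_datasets, lr=0.1):
    """One round of a minimal FedAvg-style protocol (illustrative sketch).

    Each party takes a local gradient step on its private data for a simple
    mean-estimation loss, then shares only its updated parameters; the
    server averages the parameters, so raw data never leaves the clients.
    """
    updates = []
    for data in client_datasets:
        local = global_params - lr * (global_params - data.mean())  # local step
        updates.append(local)
    return np.mean(updates, axis=0)   # server-side parameter averaging

params = np.array(0.0)
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
for _ in range(50):
    params = federated_round(params, clients)
# params converges toward the global mean of the clients' data (2.5)
```

It is exactly this repeated exposure of local parameters, round after round, that the leakage analyses in this line of work target.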
no code implementations • 15 Jun 2019 • Virat Shejwalkar, Amir Houmansadr
Large capacity machine learning (ML) models are prone to membership inference attacks (MIAs), which aim to infer whether the target sample is a member of the target model's training dataset.
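A standard baseline MIA, confidence thresholding, illustrates the attack surface (this is a common baseline, not necessarily the attack studied in the paper; the threshold and confidence values are assumptions):

```python
import numpy as np

def confidence_mia(confidences, threshold=0.9):
    """Threshold-based membership inference (a standard baseline attack).

    Samples on which the target model is highly confident are guessed to be
    training members, exploiting the tendency of large-capacity models to be
    more confident on data they were trained on.
    """
    return confidences >= threshold

# Hypothetical max-softmax confidences for member vs. non-member samples.
member_conf = np.array([0.99, 0.95, 0.97])
nonmember_conf = np.array([0.6, 0.85, 0.7])
guesses = confidence_mia(np.concatenate([member_conf, nonmember_conf]))
```

The attack succeeds to the extent that the member and non-member confidence distributions separate, which is why overfitting in large-capacity models is the root cause named above.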