1 code implementation • 29 Sep 2023 • Jiayuan Ye, Anastasia Borovykh, Soufiane Hayou, Reza Shokri
We introduce an analytical framework to quantify the change in a machine learning algorithm's output distribution after a few data points are included in its training set, a notion we define as leave-one-out distinguishability (LOOD).
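As a quick illustration of the notion (a minimal sketch, not the paper's analytical framework), the following code empirically estimates how a randomized learner's output distribution at a probe point shifts when one extra record is added to the training set. The synthetic data, the SGDClassifier learner, the single query point, and the Gaussian-KL proxy are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
z_x, z_y = rng.normal(size=(1, 5)), np.array([1])  # the extra record
query = rng.normal(size=(1, 5))                    # point where we probe the output

def output_samples(Xtr, ytr, n_seeds=50):
    """Sample the randomized algorithm's output at `query` across seeds."""
    outs = []
    for seed in range(n_seeds):
        clf = SGDClassifier(loss="log_loss", random_state=seed).fit(Xtr, ytr)
        outs.append(clf.predict_proba(query)[0, 1])
    return np.array(outs)

p_without = output_samples(X, y)
p_with = output_samples(np.vstack([X, z_x]), np.append(y, z_y))

# Gaussian-KL proxy for how distinguishable the two output distributions are
mu0, s0 = p_without.mean(), p_without.std() + 1e-8
mu1, s1 = p_with.mean(), p_with.std() + 1e-8
kl = np.log(s0 / s1) + (s1**2 + (mu1 - mu0) ** 2) / (2 * s0**2) - 0.5
print(f"LOOD proxy (Gaussian KL) at query point: {kl:.4f}")
```

A larger value of this proxy suggests the query point's prediction reveals more about whether the extra record was in the training set.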
1 code implementation • 11 Sep 2023 • Zebang Shen, Jiayuan Ye, Anmin Kang, Hamed Hassani, Reza Shokri
Repeated parameter sharing in federated learning causes significant information leakage about private data, thus defeating its main purpose: data privacy.
no code implementations • 10 Mar 2022 • Jiayuan Ye, Reza Shokri
We prove that, in these settings, our privacy bound converges exponentially fast and is substantially smaller than the composition bounds, notably after only a few training epochs.
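A toy illustration of the contrast (with hypothetical constants and functional forms, not the paper's actual bounds): a composition-style privacy cost that keeps growing with the number of epochs versus a bound that converges exponentially to a finite limit.

```python
import numpy as np

epochs = np.arange(1, 201)
eps_step = 0.1                                 # hypothetical per-epoch privacy cost
eps_composition = eps_step * np.sqrt(epochs)   # advanced-composition-style growth
eps_limit, rate = 2.0, 0.05                    # hypothetical limit and convergence rate
eps_converging = eps_limit * (1 - np.exp(-rate * epochs))  # exponentially converging bound

for t in (10, 50, 200):
    print(f"epoch {t:3d}: composition ~ {eps_composition[t-1]:.2f}, "
          f"converging ~ {eps_converging[t-1]:.2f}")
```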
1 code implementation • 18 Nov 2021 • Jiayuan Ye, Aadyaa Maddi, Sasi Kumar Murakonda, Vincent Bindschaedler, Reza Shokri
Membership inference attacks are used as an auditing tool to quantify how much a machine learning model leaks about its training data.
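For intuition, here is a minimal loss-threshold membership inference attack of the classic kind such audits build on; the paper's enhanced attacks calibrate to per-example hardness, which this sketch deliberately omits. The synthetic data, the logistic-regression target model, and the single global threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 20))
y = (X[:, :3].sum(axis=1) > 0).astype(int)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

model = LogisticRegression().fit(X_in, y_in)  # members = training set

def per_example_loss(model, X, y):
    """Cross-entropy loss of the true label for each example."""
    p = np.clip(model.predict_proba(X)[np.arange(len(y)), y], 1e-12, 1.0)
    return -np.log(p)

loss_members = per_example_loss(model, X_in, y_in)
loss_nonmembers = per_example_loss(model, X_out, y_out)

# Attack: guess "member" whenever the loss falls below a threshold
threshold = np.median(np.concatenate([loss_members, loss_nonmembers]))
tpr = (loss_members < threshold).mean()     # members correctly flagged
fpr = (loss_nonmembers < threshold).mean()  # non-members wrongly flagged
print(f"attack TPR {tpr:.2f} vs FPR {fpr:.2f} (gap = leakage signal)")
```

The gap between the true-positive and false-positive rates is the leakage signal an auditor would report.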
no code implementations • 29 Sep 2021 • Jiayuan Ye, Aadyaa Maddi, Sasi Kumar Murakonda, Reza Shokri
In this paper, we present a framework that explains both the implicit assumptions and the simplifications made in prior work.
no code implementations • NeurIPS 2021 • Rishav Chourasia, Jiayuan Ye, Reza Shokri
What is the information leakage of an iterative randomized learning algorithm about its training data, when the internal state of the algorithm is private?
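A minimal sketch of the setting, under toy assumptions (a quadratic loss and Langevin-style Gaussian noise): noisy gradient descent keeps its intermediate iterates internal and releases only the final model, which is the kind of output whose leakage this question concerns.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, size=100)  # toy dataset
eta, sigma, steps = 0.1, 0.5, 200     # step size, noise scale, iterations

theta = 0.0
for _ in range(steps):
    grad = theta - data.mean()  # gradient of the average of 0.5 * (theta - x)^2
    # Gaussian noise injected at each step; the iterate stays internal (private)
    theta = theta - eta * grad + sigma * np.sqrt(2 * eta) * rng.normal()

# Only the final iterate is released; the question is what this single
# output reveals about `data`, rather than the whole trajectory.
print(f"released model: {theta:.3f}")
```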