In addition, we propose the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility of the ML model.
Our experimental evaluation shows that while models trained without privacy mechanisms are vulnerable to membership inference attacks in the balanced-prior setting, the privacy risk appears negligible in the skewed-prior setting.
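For intuition on why the prior matters, the following sketch applies Bayes' rule to compute an attack's precision under a balanced and a skewed membership prior; the true-positive and false-positive rates used here are illustrative assumptions, not measured results.

```python
# Illustrative (assumed) attack operating point: 70% true-positive rate,
# 30% false-positive rate on queried records.
tpr, fpr = 0.70, 0.30

def attack_precision(prior_member: float) -> float:
    """Bayes-rule precision of the attack given the prior P(record is a member)."""
    p_m = prior_member
    return (tpr * p_m) / (tpr * p_m + fpr * (1.0 - p_m))

print(attack_precision(0.5))    # balanced prior -> 0.70
print(attack_precision(0.001))  # skewed prior   -> ~0.002
```

Under the skewed prior, almost every record the attack flags as a member is in fact a non-member, which is why the privacy risk becomes negligible even though the attack's per-record behavior is unchanged.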
We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained.
To perform membership inference attacks, we leverage existing inference methods that exploit the target model's predictions.
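One widely studied prediction-exploiting method is the shadow-model attack. The sketch below is a simplified rendering of that idea under assumed model choices (scikit-learn classifiers, a data pool assumed to come from the same distribution as the target's training data); it is an illustration of the technique, not the exact procedure used in our evaluation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

def train_shadow_attack(shadow_X, shadow_y, n_shadows=5, rng=np.random.default_rng(0)):
    """Train an attack classifier on (prediction vector, membership label) pairs
    collected from shadow models.  Assumes every class appears in each shadow
    split so that prediction vectors have matching dimensions."""
    feats, labels = [], []
    for _ in range(n_shadows):
        idx = rng.permutation(len(shadow_X))
        half = len(shadow_X) // 2
        tr, out = idx[:half], idx[half:]
        shadow = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300)
        shadow.fit(shadow_X[tr], shadow_y[tr])
        # Records used for training get membership label 1, held-out records 0.
        feats.append(shadow.predict_proba(shadow_X[tr]))
        labels.append(np.ones(len(tr)))
        feats.append(shadow.predict_proba(shadow_X[out]))
        labels.append(np.zeros(len(out)))
    attack = RandomForestClassifier(n_estimators=100)
    attack.fit(np.vstack(feats), np.concatenate(labels))
    return attack

def infer_membership(attack, target_model, records):
    """Query the target model (assumed to expose confidence vectors) and let the
    attack classifier decide membership for each record."""
    return attack.predict(target_model.predict_proba(records))
```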
Finally, we discuss the privacy concerns associated with sharing synthetic data produced by GANs and test the resilience of such data to a simple membership inference attack.
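As a minimal sketch of what a simple attack against released synthetic data could look like (the decision rule and threshold here are assumptions for illustration, not necessarily the attack evaluated in this work), an adversary may flag a candidate record as a training member if it lies unusually close to some synthetic sample, since an overfitted generator tends to emit near-copies of its training records.

```python
import numpy as np

def synthetic_data_mia(synthetic, candidates, threshold):
    """Flag a candidate record as a likely training member if its distance to the
    nearest released synthetic sample falls below `threshold` (an assumed
    hyperparameter the adversary would calibrate on reference data)."""
    # Pairwise Euclidean distances, shape (n_candidates, n_synthetic).
    dists = np.linalg.norm(candidates[:, None, :] - synthetic[None, :, :], axis=-1)
    nearest = dists.min(axis=1)
    return nearest < threshold  # True -> inferred member
```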
A membership inference attack (MIA) determines whether a given record was part of a machine learning model's training data by querying the model.
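For concreteness, a minimal query-based sketch of this idea is given below, assuming black-box access to the target model's confidence scores; the decision threshold is an assumed hyperparameter the adversary would need to tune.

```python
import numpy as np

def confidence_threshold_mia(target_model, records, labels, threshold=0.9):
    """Query the target model and infer 'member' when its confidence in a
    record's true label exceeds the threshold, exploiting the tendency of
    overfitted models to be more confident on their training data.
    `labels` are the records' true class indices."""
    probs = target_model.predict_proba(records)  # assumed black-box query API
    conf_true_label = probs[np.arange(len(records)), labels]
    return conf_true_label > threshold           # True -> inferred member
```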