
Inference Attack

7 papers with code · Adversarial

Benchmarks

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

4 Jun 2018 · Lab41/cyphercat

In addition, we propose the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility of the ML model.

INFERENCE ATTACK

Revisiting Membership Inference Under Realistic Assumptions

21 May 2020 · bargavj/EvaluatingDPML

Our experimental evaluation shows that while models trained without privacy mechanisms are vulnerable to membership inference attacks in balanced prior settings, there appears to be negligible privacy risk in the skewed prior setting.

INFERENCE ATTACK

Membership Inference Attacks against Machine Learning Models

18 Oct 2016 · spring-epfl/mia

We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained.

INFERENCE ATTACK

Privacy Risks of Securing Machine Learning Models against Adversarial Examples

24 May 2019 · inspire-group/privacy-vs-robustness

To perform the membership inference attacks, we leverage existing inference methods that exploit model predictions (see the sketch after this entry).

ADVERSARIAL DEFENSE INFERENCE ATTACK
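
A minimal sketch of one such prediction-exploiting attack: thresholding the model's confidence in the true label. This is not the paper's exact implementation; the names (model, X, y, threshold) and the sklearn-style predict_proba interface are assumptions for illustration.

```python
# Confidence-thresholding membership inference (illustrative sketch).
# Assumes an sklearn-style classifier exposing predict_proba.
import numpy as np

def confidence_membership_attack(model, X, y, threshold=0.9):
    """Guess 'member' when the model's confidence in the true label
    exceeds a threshold; training records tend to score higher."""
    probs = model.predict_proba(X)                  # shape (n_samples, n_classes)
    true_class_conf = probs[np.arange(len(y)), y]   # confidence on the true label
    return true_class_conf >= threshold             # boolean membership guesses
```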

Synthesis of Realistic ECG using Generative Adversarial Networks

19 Sep 2019 · Brophy-E/ECG_GAN_MBD

Finally, we discuss the privacy concerns associated with sharing synthetic data produced by GANs and test their ability to withstand a simple membership inference attack.

INFERENCE ATTACK TIME SERIES

Understanding Membership Inferences on Well-Generalized Learning Models

13 Feb 2018 · BielStela/membership_inference

Membership Inference Attack (MIA) determines the presence of a record in a machine learning model's training data by querying the model (see the shadow-model sketch after this entry).

INFERENCE ATTACK
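
One common way to instantiate such an attack is with shadow models, in the spirit of the Shokri et al. paper listed above: train models that imitate the target on data whose membership is known, then train an attack classifier on their prediction vectors. The sketch below is a simplified version with a single attack model rather than per-class attack models; all function and variable names, the choice of LogisticRegression, and the assumption that every model shares the same label space are illustrative.

```python
# Simplified shadow-model membership inference (illustrative sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_attack_model(shadow_models, shadow_splits):
    """Build an attack classifier from shadow models whose membership
    ground truth is known: features are prediction vectors, labels are
    1 for shadow-training records and 0 for held-out records."""
    feats, labels = [], []
    for model, (X_in, X_out) in zip(shadow_models, shadow_splits):
        feats.append(model.predict_proba(X_in))
        labels.append(np.ones(len(X_in)))
        feats.append(model.predict_proba(X_out))
        labels.append(np.zeros(len(X_out)))
    attack = LogisticRegression(max_iter=1000)
    attack.fit(np.vstack(feats), np.concatenate(labels))
    return attack

def infer_membership(attack, target_model, X_query):
    """Query the target model and let the attack classifier decide whether
    each record was part of the target's training data."""
    return attack.predict(target_model.predict_proba(X_query))
```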