Inference Attack

86 papers with code • 0 benchmarks • 2 datasets

Inference attacks aim to extract private information about a model's training data, e.g., membership inference, property inference, and data reconstruction.

Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk

vita-group/shake-to-leak 14 Mar 2024

While diffusion models have recently demonstrated remarkable progress in generating realistic images, privacy risks also arise: published models or APIs could generate training images and thus leak privacy-sensitive training information.
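
As a rough illustration of that leakage check, here is a minimal sketch that flags generated samples lying suspiciously close to training images; the function name, pixel-MSE distance, and threshold are illustrative assumptions, not taken from the paper.

```python
import torch

def flag_memorized(generated: torch.Tensor, train: torch.Tensor, thresh: float = 0.05):
    """Flag generated images whose nearest training image is suspiciously close.

    generated: (G, C, H, W) samples drawn from the published model/API.
    train:     (N, C, H, W) training images (pixel values in [0, 1]).
    thresh:    illustrative per-pixel MSE threshold, not from the paper.
    """
    g = generated.flatten(1)                     # (G, D)
    t = train.flatten(1)                         # (N, D)
    # Pairwise mean-squared error between every generated/train pair.
    d2 = torch.cdist(g, t).pow(2) / g.shape[1]   # (G, N)
    nearest, idx = d2.min(dim=1)
    return [(i, int(idx[i])) for i in range(len(g)) if nearest[i] < thresh]

# Toy usage: random tensors stand in for real images.
hits = flag_memorized(torch.rand(8, 3, 32, 32), torch.rand(100, 3, 32, 32))
print(hits)  # pairs (generated_index, train_index) that look near-duplicated
```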

Inf2Guard: An Information-Theoretic Framework for Learning Privacy-Preserving Representations against Inference Attacks

leilynourbakhsh/inf2guard 4 Mar 2024

Machine learning (ML) is vulnerable to inference attacks (e.g., membership inference, property inference, and data reconstruction) that aim to infer private information about the training data or dataset.
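
For concreteness, the simplest membership inference baseline thresholds the target model's loss on a query point, since training members tend to incur lower loss. A minimal sketch of that textbook attack (the generic baseline, not Inf2Guard's defense; the threshold `tau` is an illustrative assumption):

```python
import torch
import torch.nn.functional as F

def membership_score(model, x, y):
    """Loss-threshold membership inference (textbook baseline).

    Lower cross-entropy usually means x was in the training set,
    because models overfit their training members.
    """
    model.eval()
    with torch.no_grad():
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    return -loss.item()  # higher score = more likely a member

# Toy usage with a linear "model"; tau would be calibrated on shadow data.
model = torch.nn.Linear(20, 5)
x, y = torch.randn(20), torch.tensor(3)
tau = -1.5  # illustrative threshold
print("member" if membership_score(model, x, y) > tau else "non-member")
```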

Safety and Performance, Why Not Both? Bi-Objective Optimized Model Compression against Heterogeneous Attacks Toward AI Software Deployment

jiepku/safecompress 2 Jan 2024

To mitigate this issue, AI software compression plays a crucial role: it aims to reduce model size while maintaining high performance.
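
As one concrete instance of such compression, here is a sketch of generic global magnitude pruning, which zeroes the smallest weights to shrink the effective model. This is a common baseline technique, not the paper's bi-objective method, and the sparsity level is an illustrative assumption.

```python
import torch

def magnitude_prune(model: torch.nn.Module, sparsity: float = 0.5):
    """Zero out the smallest-magnitude weights globally (generic pruning
    baseline; not the paper's bi-objective method)."""
    weights = [p for p in model.parameters() if p.dim() > 1]
    all_vals = torch.cat([w.abs().flatten() for w in weights])
    # Global threshold: the sparsity-quantile of absolute weight values.
    thresh = torch.quantile(all_vals, sparsity)
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > thresh).float())

model = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 10))
magnitude_prune(model, sparsity=0.8)
zeros = sum((p == 0).sum().item() for p in model.parameters() if p.dim() > 1)
total = sum(p.numel() for p in model.parameters() if p.dim() > 1)
print(f"{zeros}/{total} weights pruned")
```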

User Consented Federated Recommender System Against Personalized Attribute Inference Attack

hkust-knowcomp/uc-fedrec 23 Dec 2023

However, the recommendation model learned by a common FedRec may still be vulnerable to private information leakage, particularly attribute inference attacks, in which an attacker can easily infer users' personal attributes from the learned model.
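
A sketch of what such an attack can look like in the generic case: assuming the attacker has extracted user embeddings from the learned model and knows attribute labels for some users, it fits a classifier and predicts attributes for the rest. All shapes, names, and hyperparameters below are illustrative.

```python
import torch
import torch.nn.functional as F

# Generic attribute inference: given user embeddings extracted from a learned
# recommender and attribute labels for a subset of users, the attacker fits a
# classifier and predicts attributes (e.g., gender) for everyone else.
emb_known = torch.randn(500, 32)            # embeddings of users with known attributes
attr_known = torch.randint(0, 2, (500,))    # their binary attribute labels
emb_victim = torch.randn(100, 32)           # embeddings of target users

attacker = torch.nn.Linear(32, 2)
opt = torch.optim.Adam(attacker.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = F.cross_entropy(attacker(emb_known), attr_known)
    loss.backward()
    opt.step()

inferred = attacker(emb_victim).argmax(dim=1)  # attacker's guess per victim
print(inferred[:10])
```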

DUCK: Distance-based Unlearning via Centroid Kinematics

ocram17/duck 4 Dec 2023

Machine unlearning is emerging as a new field, driven by the pressing need to ensure privacy in modern artificial intelligence models.

Addressing Membership Inference Attack in Federated Learning with Model Compression

negedng/ma-fl-mia 29 Nov 2023

In this paper, we show that the effectiveness of these attacks on the clients negatively correlates with the size of the client datasets and model complexity.
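
The intuition is that smaller datasets overfit more, and membership inference success tracks the train/test generalization gap. A toy sketch of that proxy (the synthetic task, model, and gap metric are assumptions for illustration, not the paper's federated setup):

```python
import torch
import torch.nn.functional as F

def train_test_gap(n_train: int, epochs: int = 200) -> float:
    """Train a small classifier on n_train points and return the
    test-minus-train loss gap, a common proxy for MIA vulnerability."""
    torch.manual_seed(0)
    w = torch.randn(10, 3)  # ground-truth linear labeling rule
    def make(n):
        x = torch.randn(n, 10)
        return x, (x @ w).argmax(dim=1)
    xtr, ytr = make(n_train)
    xte, yte = make(1000)
    model = torch.nn.Sequential(torch.nn.Linear(10, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(xtr), ytr).backward()
        opt.step()
    with torch.no_grad():
        gap = F.cross_entropy(model(xte), yte) - F.cross_entropy(model(xtr), ytr)
    return gap.item()

for n in (20, 100, 500):
    print(n, round(train_test_gap(n), 3))  # gap (and MIA risk) shrinks as n grows
```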

MIA-BAD: An Approach for Enhancing Membership Inference Attack and its Mitigation with Federated Learning

soumyaxyz/Privacy-Preserving-Federated-Learning 28 Nov 2023

In this paper, we propose an enhanced Membership Inference Attack with the Batch-wise generated Attack Dataset (MIA-BAD), a modification to the MIA approach.
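
For background, the "attack dataset" in a shadow-model MIA consists of the shadow model's confidence vectors labeled member/non-member, on which an attack classifier is then trained. A generic sketch of that construction (the standard recipe, not MIA-BAD's batch-wise variant; all names are illustrative):

```python
import torch

def build_attack_dataset(shadow_model, member_x, nonmember_x):
    """Build a generic MIA attack dataset: the shadow model's softmax
    confidence vectors, labeled 1 for training members and 0 otherwise.
    (Standard shadow-model recipe, not MIA-BAD's batch-wise variant.)"""
    shadow_model.eval()
    with torch.no_grad():
        feats = torch.cat([shadow_model(member_x).softmax(dim=1),
                           shadow_model(nonmember_x).softmax(dim=1)])
    labels = torch.cat([torch.ones(len(member_x)), torch.zeros(len(nonmember_x))])
    return feats, labels

# Toy usage: the attack model is then trained on (feats, labels).
shadow = torch.nn.Linear(16, 4)
feats, labels = build_attack_dataset(shadow, torch.randn(50, 16), torch.randn(50, 16))
print(feats.shape, labels.shape)  # torch.Size([100, 4]) torch.Size([100])
```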

Generated Distributions Are All You Need for Membership Inference Attacks Against Generative Models

minxingzhang/miagm 30 Oct 2023

Several membership inference attacks (MIAs) have been proposed to demonstrate the privacy vulnerability of generative models by classifying a query image as a member or non-member of the training dataset.
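
A hedged sketch of one such black-box heuristic: score a query by its distance to the nearest of many samples drawn from the target generator, on the premise that training members lie closer to the generated distribution. The distance metric and threshold are illustrative assumptions, not the paper's exact attack.

```python
import torch

def membership_score(query: torch.Tensor, generated: torch.Tensor) -> float:
    """Score a query image by its distance to the nearest of many samples
    drawn from the target generative model: training members tend to lie
    closer to the generated distribution. Illustrative only."""
    q = query.flatten().unsqueeze(0)   # (1, D)
    g = generated.flatten(1)           # (S, D)
    nearest = torch.cdist(q, g).min().item()
    return -nearest  # higher score = more likely a training member

samples = torch.rand(256, 3, 32, 32)   # stand-in for generator samples
query = torch.rand(3, 32, 32)
tau = -5.0                             # threshold calibrated on reference data
print("member" if membership_score(query, samples) > tau else "non-member")
```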

No Privacy Left Outside: On the (In-)Security of TEE-Shielded DNN Partition for On-Device ML

ziqi-zhang/teeslice-artifact 11 Oct 2023

These solutions, referred to as TEE-Shielded DNN Partition (TSDP), partition a DNN model into two parts, offloading the privacy-insensitive part to the GPU while shielding the privacy-sensitive part within the TEE.
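
A minimal sketch of that partition pattern, with the CPU standing in for the TEE (a real deployment would execute the shielded part inside an enclave such as SGX; the split point and layer sizes are illustrative):

```python
import torch

# TSDP pattern sketch: run the privacy-insensitive backbone on the GPU and
# the privacy-sensitive layers in the TEE. Here the CPU stands in for the
# enclave; a real deployment would run `private_head` inside e.g. SGX.
device = "cuda" if torch.cuda.is_available() else "cpu"

public_backbone = torch.nn.Sequential(          # offloaded to the GPU
    torch.nn.Linear(128, 64), torch.nn.ReLU(),
).to(device)

private_head = torch.nn.Linear(64, 10)          # shielded part ("TEE" = CPU here)

def tsdp_forward(x: torch.Tensor) -> torch.Tensor:
    feats = public_backbone(x.to(device))       # untrusted, fast hardware
    return private_head(feats.cpu())            # trusted, shielded hardware

print(tsdp_forward(torch.randn(4, 128)).shape)  # torch.Size([4, 10])
```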

SLMIA-SR: Speaker-Level Membership Inference Attacks against Speaker Recognition Systems

s3l-official/slmia-sr 14 Sep 2023

Our attack is versatile and can work in both white-box and black-box scenarios.
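
A hedged sketch of what a black-box, speaker-level decision might look like: the attacker queries the system for similarity scores among several utterances of one speaker and thresholds the average, on the premise that training speakers score more confidently. The aggregation, the `score_fn` stand-in, and the threshold are all assumptions, not the paper's feature set.

```python
import torch

def speaker_membership(score_fn, utterances, tau: float = 0.8) -> bool:
    """Speaker-level decision from black-box similarity queries.

    score_fn(a, b): the target system's similarity score for two utterances.
    utterances:     several recordings of the candidate speaker.
    Intuition: systems score utterance pairs of *training* speakers more
    confidently. Aggregation and threshold are illustrative.
    """
    scores = [score_fn(utterances[i], utterances[j])
              for i in range(len(utterances))
              for j in range(i + 1, len(utterances))]
    return sum(scores) / len(scores) > tau

# Toy stand-in for the target system: cosine similarity of random embeddings.
embed = torch.nn.Linear(100, 32)
def score_fn(a, b):
    with torch.no_grad():
        return torch.cosine_similarity(embed(a), embed(b), dim=0).item()

utts = [torch.randn(100) for _ in range(4)]
print(speaker_membership(score_fn, utts))
```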
