Inference Attack
87 papers with code • 0 benchmarks • 2 datasets
Benchmarks
These leaderboards are used to track progress in Inference Attack
Libraries
Use these libraries to find Inference Attack models and implementations
Latest papers
Data Origin Inference in Machine Learning
We formally define data origin and the data origin inference task in the development of ML models (mainly neural networks).
Deep Regression Unlearning
In the last few years, there have been notable developments in machine unlearning, which aims to remove the information of specific training data from ML models efficiently and effectively.
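For orientation, here is a minimal unlearning baseline (gradient ascent on the forget set); this is a generic sketch of the goal, not the paper's regression-unlearning method:

```python
# Minimal unlearning baseline: gradient ascent on the forget set.
# A hedged illustration of the generic goal (removing specific samples'
# influence from a trained model), not the paper's technique.
import torch
import torch.nn as nn

def unlearn_by_ascent(model, forget_loader, lr=1e-4, steps=50):
    """Push the model's loss UP on samples it should forget."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # regression setting, matching the paper's title
    model.train()
    it = iter(forget_loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(forget_loader)
            x, y = next(it)
        opt.zero_grad()
        loss = -loss_fn(model(x), y)  # negated loss: ascend on forget data
        loss.backward()
        opt.step()
    return model
```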
M^4I: Multi-modal Models Membership Inference
To achieve this, we propose Multi-modal Models Membership Inference (M^4I) with two attack methods for inferring membership status: metric-based (MB) M^4I and feature-based (FB) M^4I.
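The simplest metric-based membership inference thresholds a per-sample score such as the loss; the sketch below is a hedged, uni-modal stand-in for the idea behind metric-based M^4I (the paper scores multi-modal image-text pairs):

```python
# Metric-based membership inference in its simplest form: threshold a
# per-sample score (here, classification loss). Samples the model fits
# unusually well are guessed to be training members.
import torch
import torch.nn.functional as F

@torch.no_grad()
def loss_scores(model, loader):
    model.eval()
    scores = []
    for x, y in loader:
        logits = model(x)
        scores.extend(F.cross_entropy(logits, y, reduction="none").tolist())
    return scores

def infer_membership(model, loader, threshold):
    # Loss below the threshold -> guess "member".
    return [s < threshold for s in loss_scores(model, loader)]
```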
Does CLIP Know My Face?
Our large-scale experiments on CLIP demonstrate that individuals used for training can be identified with very high accuracy.
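The underlying query pattern can be illustrated with the public CLIP checkpoint on Hugging Face: ask the model which of several candidate names best matches a face image, and treat a consistently winning true name as evidence the person appeared in training. The names and image path below are placeholders, and this is a sketch of the idea rather than the paper's full protocol:

```python
# Hedged sketch of identity inference against CLIP: score candidate name
# prompts against a face image and inspect which name the model prefers.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

names = ["a photo of Alice Example", "a photo of Bob Example"]  # hypothetical
image = Image.open("face.jpg")  # placeholder path

inputs = processor(text=names, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(names, probs[0].tolist())))
```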
Are Attribute Inference Attacks Just Imputation?
Our main conclusions are: (1) previous attribute inference methods reveal no more about the training data than an adversary could infer without access to the trained model, given the same knowledge of the underlying distribution needed to train the attribute inference attack; (2) black-box attribute inference attacks rarely learn anything that cannot be learned without the model; but (3) white-box attacks, which we introduce and evaluate in the paper, can reliably identify some records with the sensitive attribute value that would not be predicted without access to the model.
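A minimal, self-contained illustration of the imputation-versus-attack comparison, on synthetic data with illustrative variable names (this is not the paper's experimental setup):

```python
# Imputation baseline: predict the sensitive attribute s from the other
# attributes X alone. Black-box attack: additionally use the target
# model's confidence on each record. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
s = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
target_conf = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * s)))  # toy target model output

X_tr, X_te, s_tr, s_te, c_tr, c_te = train_test_split(
    X, s, target_conf, random_state=0)

imputer = LogisticRegression().fit(X_tr, s_tr)
attack = LogisticRegression().fit(np.column_stack([X_tr, c_tr]), s_tr)

print("imputation acc:", imputer.score(X_te, s_te))
print("black-box attack acc:", attack.score(np.column_stack([X_te, c_te]), s_te))
```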
SNAP: Efficient Extraction of Private Properties with Poisoning
Property inference attacks allow an adversary to extract global properties of the training dataset from a machine learning model.
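A generic property inference skeleton (without SNAP's poisoning component) trains shadow models on data with and without the property, queries them on fixed probes, and fits a meta-classifier on the outputs; a toy sketch:

```python
# Generic property inference: the meta-classifier learns to tell apart
# output vectors of shadow models trained with vs. without the property.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
probes = rng.normal(size=(20, 5))  # fixed probe queries

def shadow_outputs(has_property):
    # Hypothetical property: it shifts the data mean, so shadow models
    # trained on it respond differently to the probes.
    shift = 1.0 if has_property else 0.0
    Xs = rng.normal(loc=shift, size=(200, 5))
    ys = (Xs.sum(axis=1) > shift * 5).astype(int)
    m = LogisticRegression().fit(Xs, ys)
    return m.predict_proba(probes)[:, 1]

feats = np.array([shadow_outputs(p) for p in [0, 1] * 50])
labels = np.array([0, 1] * 50)
meta = LogisticRegression().fit(feats, labels)
# meta.predict(target_model_outputs) then guesses the global property.
```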
Inferring Sensitive Attributes from Model Explanations
We focus on the specific privacy risk of attribute inference attack wherein an adversary infers sensitive attributes of an input (e.g., race and sex) given its model explanations.
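One way to see the attack surface: treat a simple gradient-based explanation as the adversary's feature vector and learn a mapping from explanations to the sensitive attribute. A hedged sketch with a toy model:

```python
# Input-gradient explanations collected by the adversary can be fed to a
# downstream classifier that predicts the sensitive attribute.
import torch
import torch.nn as nn

def explanation(model, x):
    """Input-gradient explanation for the model's top prediction."""
    x = x.clone().requires_grad_(True)
    score = model(x).max(dim=1).values.sum()
    score.backward()
    return x.grad.detach()

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
expl = explanation(model, torch.randn(16, 4))  # one explanation per input
# Downstream (illustrative): stack these vectors for many inputs and fit,
# e.g., sklearn's LogisticRegression to predict the sensitive attribute.
```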
A Hybrid Self-Supervised Learning Framework for Vertical Federated Learning
In this work, we propose a Federated Hybrid Self-Supervised Learning framework, named FedHSSL, that utilizes cross-party views (i.e., dispersed features) of samples aligned among parties and local views (i.e., augmentation) of unaligned samples within each party to improve the representation learning capability of the VFL joint model.
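For orientation, a minimal vertical-FL forward pass is sketched below; FedHSSL adds its self-supervised objectives over cross-party and local views on top of a setup like this:

```python
# Minimal vertical FL: each party holds a disjoint slice of every sample's
# features and sends only embeddings to the fusion model.
import torch
import torch.nn as nn

party_a = nn.Linear(3, 8)   # party A embeds its 3 features
party_b = nn.Linear(5, 8)   # party B embeds its 5 features
fusion = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 2))

x_a, x_b = torch.randn(32, 3), torch.randn(32, 5)  # aligned samples
logits = fusion(torch.cat([party_a(x_a), party_b(x_b)], dim=1))
```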
An Empirical Study on the Membership Inference Attack against Tabular Data Synthesis Models
Tabular data typically contains private and important information; thus, precautions must be taken before it is shared with others.
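A common baseline attack in this setting, not tied to any particular synthesis model, is distance-to-closest-record: flag a candidate as a training member if some synthetic record lies unusually close to it. A sketch with placeholder arrays:

```python
# Distance-to-closest-record (DCR) membership inference on synthetic data.
import numpy as np

def dcr_attack(candidates, synthetic, threshold):
    """Guess 'member' where the nearest synthetic record is very close."""
    dists = np.linalg.norm(
        candidates[:, None, :] - synthetic[None, :, :], axis=-1).min(axis=1)
    return dists < threshold

synthetic = np.random.normal(size=(500, 4))   # output of a synthesis model
candidates = np.random.normal(size=(10, 4))   # records the adversary tests
print(dcr_attack(candidates, synthetic, threshold=0.5))
```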
Safety and Performance, Why not Both? Bi-Objective Optimized Model Compression toward AI Software Deployment
By simulating the attack mechanism as a safety test, SafeCompress can automatically compress a large model into a small one following the dynamic sparse training paradigm.
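The outer loop's rough shape can be sketched as follows, with magnitude pruning standing in for dynamic sparse training and hypothetical evaluate_accuracy / mia_advantage helpers; this is an illustration, not SafeCompress itself:

```python
# Sparsify, then run a membership-inference "safety test", and keep
# iterating while both objectives (accuracy, attack resistance) hold.
import torch.nn as nn
import torch.nn.utils.prune as prune

def compress_step(model, amount=0.2):
    # Magnitude pruning as a stand-in for dynamic sparse training.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=amount)
    return model

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))
model = compress_step(model)  # 20% of each Linear layer's weights zeroed

# Hypothetical outer loop (helpers supplied by the reader):
# while evaluate_accuracy(model) > acc_floor and mia_advantage(model) < eps:
#     model = compress_step(model)
```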