Search Results for author: Zetao Lin

Found 1 papers, 0 papers with code

Breaking the Black-Box: Confidence-Guided Model Inversion Attack for Distribution Shift

no code implementations • 28 Feb 2024 • Xinhao Liu, Yingzhao Jiang, Zetao Lin

Model inversion attacks (MIAs) seek to infer the private training data of a target classifier by querying the model and generating synthetic images that reflect the characteristics of the target class.

Generative Adversarial Network
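The abstract above describes querying a black-box classifier and steering synthetic inputs toward high confidence for a target class. The following is a minimal toy sketch of that general idea, not the paper's method: the target classifier, its prototype-based form, and the simple accept-if-confidence-improves search are all illustrative assumptions.

```python
import numpy as np

def target_classifier(x):
    # Hypothetical stand-in for the black-box model: softmax over
    # negative squared distances to two fixed class prototypes.
    protos = np.array([[1.0, 1.0], [-1.0, -1.0]])
    logits = -((protos - x) ** 2).sum(axis=1)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def confidence_guided_inversion(query_fn, target_class, dim=2,
                                steps=500, sigma=0.3, seed=0):
    """Black-box hill climbing: propose random perturbations of the
    current candidate input and keep them only when the target-class
    confidence returned by the model increases."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=dim)
    best = query_fn(x)[target_class]
    for _ in range(steps):
        cand = x + rng.normal(scale=sigma, size=dim)
        conf = query_fn(cand)[target_class]
        if conf > best:
            x, best = cand, conf
    return x, best

x, conf = confidence_guided_inversion(target_classifier, target_class=0)
```

In this toy setup the search only needs the model's confidence scores, mirroring the black-box setting; a real attack would optimize the latent code of a generative model (e.g. a GAN) rather than raw inputs.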
