no code implementations • 3 Oct 2023 • Avital Shafran, Ilia Shumailov, Murat A. Erdogdu, Nicolas Papernot
We discover that the attacker's prior knowledge, i.e., access to in-distribution data, dominates other factors such as the attack policy the adversary follows to choose which queries to make to the victim model API.
1 code implementation • ICCV 2021 • Avital Shafran, Shmuel Peleg, Yedid Hoshen
Membership inference attacks (MIA) try to detect whether data samples were used to train a neural network model, e.g., to detect copyright abuses.
no code implementations • 1 Jan 2021 • Avital Shafran, Shmuel Peleg, Yedid Hoshen
A simple but effective approach for membership attacks can therefore use the reconstruction error.
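The reconstruction-error attack described above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the `NearestNeighborModel` stand-in, the `threshold` value, and all function names are assumptions introduced here for the example; the only idea taken from the text is that samples the model reconstructs well (low error) are flagged as training members.

```python
import numpy as np

def reconstruction_error(model, x):
    # Per-sample mean squared error between x and its reconstruction.
    x_hat = model(x)
    return np.mean((x - x_hat) ** 2, axis=tuple(range(1, x.ndim)))

def membership_attack(model, samples, threshold):
    # Predict membership: True if reconstruction error falls below threshold.
    return reconstruction_error(model, samples) < threshold

class NearestNeighborModel:
    # Toy stand-in (an assumption, not the paper's model) for a reconstruction
    # model that has overfit its training set: it returns the nearest
    # memorized training sample as the "reconstruction".
    def __init__(self, train_data):
        self.train_data = train_data

    def __call__(self, x):
        dists = ((x[:, None, :] - self.train_data[None, :, :]) ** 2).sum(-1)
        return self.train_data[np.argmin(dists, axis=1)]

rng = np.random.default_rng(0)
members = rng.normal(size=(10, 4))           # samples "used in training"
non_members = rng.normal(size=(10, 4)) + 5.0  # out-of-training samples
model = NearestNeighborModel(members)

in_preds = membership_attack(model, members, threshold=0.1)
out_preds = membership_attack(model, non_members, threshold=0.1)
print(in_preds)   # members reconstruct exactly, so all flagged True
print(out_preds)  # non-members reconstruct poorly, so flagged False
```

In practice the threshold would be calibrated on held-out data, and the reconstruction model would be the victim network itself rather than a toy memorizer.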
1 code implementation • 27 Nov 2019 • Avital Shafran, Gil Segev, Shmuel Peleg, Yedid Hoshen
As neural networks revolutionize many applications, significant privacy conflicts between model users and providers emerge.