no code implementations • 9 Jun 2022 • Shehzeen Hussain, Todd Huster, Chris Mesterharm, Paarth Neekhara, Kevin An, Malhar Jere, Harshvardhan Sikka, Farinaz Koushanfar
We find that the white-box attack success rate of a pure U-Net ATN falls substantially short of gradient-based attacks like PGD on large face recognition datasets.
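The PGD baseline mentioned above iteratively ascends the loss gradient and projects back into an L-infinity ball around the clean input. A minimal sketch on a toy logistic-regression "matcher" (all weights and data here are synthetic assumptions, not the paper's face-recognition models):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # fixed model weights (synthetic)
b = 0.1
x = rng.normal(size=16)   # clean input
y = 1.0                   # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad(x_in):
    """Gradient of the binary cross-entropy loss w.r.t. the input."""
    p = sigmoid(w @ x_in + b)
    return (p - y) * w

eps, alpha, steps = 0.3, 0.05, 40  # assumed attack budget and step size
x_adv = x.copy()
for _ in range(steps):
    x_adv = x_adv + alpha * np.sign(loss_grad(x_adv))  # gradient ascent step
    x_adv = np.clip(x_adv, x - eps, x + eps)           # project to L_inf ball

p_clean = float(sigmoid(w @ x + b))
p_adv = float(sigmoid(w @ x_adv + b))
```

The perturbation stays within the `eps` budget by construction, while the model's confidence in the true label drops; a trained ATN, by contrast, must produce such perturbations in a single forward pass.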
no code implementations • 18 Nov 2019 • Rauf Izmailov, Peter Lin, Chris Mesterharm, Samyadeep Basu
We consider membership inference attacks, one of the main privacy issues in machine learning.
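A standard baseline for membership inference (not necessarily the paper's exact method) thresholds the model's per-example loss: training members tend to have lower loss than non-members. A sketch with simulated losses, where the threshold `tau` is an assumed value one would normally tune on shadow data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a model that partially memorized its training set:
# members get low loss, non-members higher loss (synthetic distributions).
member_losses = rng.exponential(scale=0.1, size=500)
nonmember_losses = rng.exponential(scale=1.0, size=500)

tau = 0.3  # assumed loss threshold for the attack

# Predict "member" whenever the observed loss falls below tau.
tp = int(np.sum(member_losses < tau))      # members correctly flagged
tn = int(np.sum(nonmember_losses >= tau))  # non-members correctly rejected
accuracy = (tp + tn) / 1000
```

An attack accuracy well above 50% on such losses indicates the model leaks membership information.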
no code implementations • 9 Oct 2019 • Samyadeep Basu, Rauf Izmailov, Chris Mesterharm
With the increasing adoption of AI, inherent security and privacy vulnerabilities in machine learning systems are being discovered.
no code implementations • 19 Feb 2019 • Chris Mesterharm, Rauf Izmailov, Scott Alexander, Simon Tsang
In this paper, we consider batch supervised learning where an adversary is allowed to corrupt instances with arbitrarily large noise.