Search Results for author: Chris Mesterharm

Found 4 papers, 0 papers with code

ReFace: Real-time Adversarial Attacks on Face Recognition Systems

no code implementations · 9 Jun 2022 · Shehzeen Hussain, Todd Huster, Chris Mesterharm, Paarth Neekhara, Kevin An, Malhar Jere, Harshvardhan Sikka, Farinaz Koushanfar

We find that the white-box attack success rate of a pure U-Net ATN falls substantially short of gradient-based attacks like PGD on large face recognition datasets.

Face Identification · Face Recognition · +1
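The abstract snippet above contrasts a learned attack network (a pure U-Net ATN) with gradient-based attacks such as PGD. For context, a minimal, generic PGD sketch in PyTorch follows; it is not the paper's ReFace method, and `model`, `eps`, `alpha`, and `steps` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected Gradient Descent: repeatedly ascend the loss and project the
    perturbed input back into an L-infinity ball of radius eps around x."""
    # random start inside the eps-ball, clipped to the valid pixel range
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # gradient ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)                              # stay in valid pixel range
    return x_adv.detach()
```

Unlike an ATN, which amortizes attack generation into a single forward pass, PGD spends `steps` gradient computations per input, which is the usual trade-off the snippet alludes to.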

Privacy Leakage Avoidance with Switching Ensembles

no code implementations · 18 Nov 2019 · Rauf Izmailov, Peter Lin, Chris Mesterharm, Samyadeep Basu

We consider membership inference attacks, one of the main privacy issues in machine learning.

BIG-bench Machine Learning
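Membership inference means deciding whether a specific example was part of the target model's training set. The sketch below is a standard confidence-threshold baseline attack, not the paper's switching-ensemble defense; the confidence values and the threshold are made-up illustrations.

```python
import numpy as np

def confidence_attack(confidences, threshold=0.9):
    """Baseline membership inference: flag an example as a training-set member
    if the target model's confidence in the true label exceeds a threshold."""
    return confidences >= threshold

# hypothetical confidences of a target model on known members / non-members
member_conf = np.array([0.99, 0.97, 0.85, 0.95])
nonmember_conf = np.array([0.60, 0.92, 0.40, 0.70])

preds = np.concatenate([confidence_attack(member_conf),
                        confidence_attack(nonmember_conf)])
labels = np.concatenate([np.ones(4), np.zeros(4)])
print(f"attack accuracy: {(preds == labels).mean():.2f}")
```

The attack exploits the gap between a model's behavior on training and unseen data, which is exactly the leakage the paper's ensembles aim to avoid.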

Membership Model Inversion Attacks for Deep Networks

no code implementations · 9 Oct 2019 · Samyadeep Basu, Rauf Izmailov, Chris Mesterharm

With the increasing adoption of AI, inherent security and privacy vulnerabilities for machine learning systems are being discovered.

Optical Character Recognition (OCR)
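Model inversion attacks reconstruct inputs that are representative of a class directly from a trained model. The sketch below is a generic gradient-based inversion loop in PyTorch, not the authors' specific attack; `model`, the input `shape`, and the optimizer settings are assumptions.

```python
import torch

def invert_class(model, target_class, shape=(1, 3, 64, 64), steps=200, lr=0.1):
    """Gradient-based model inversion: start from noise and optimize the input
    so the model assigns a high score to the target class."""
    x = torch.rand(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        loss = -logits[:, target_class].mean()  # maximize the target-class logit
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0, 1)  # keep the reconstruction in a valid pixel range
    return x.detach()
```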

Subspace Methods That Are Resistant to a Limited Number of Features Corrupted by an Adversary

no code implementations · 19 Feb 2019 · Chris Mesterharm, Rauf Izmailov, Scott Alexander, Simon Tsang

In this paper, we consider batch supervised learning where an adversary is allowed to corrupt instances with arbitrarily large noise.
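The abstract describes a threat model in which an adversary may overwrite a limited number of features per instance with arbitrarily large noise. The helper below only simulates that corruption model for experimentation; it is not the paper's subspace defense, and `n_corrupt` and `scale` are arbitrary choices.

```python
import numpy as np

def corrupt_features(X, n_corrupt=3, scale=1e6, seed=None):
    """Replace n_corrupt randomly chosen features of each instance
    with large Gaussian noise, mimicking the adversary in the abstract."""
    rng = np.random.default_rng(seed)
    X_adv = X.copy()
    for i in range(X_adv.shape[0]):
        idx = rng.choice(X_adv.shape[1], size=n_corrupt, replace=False)
        X_adv[i, idx] = rng.normal(scale=scale, size=n_corrupt)
    return X_adv
```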
