Search Results for author: Todd Huster

Found 7 papers, 0 papers with code

ReFace: Real-time Adversarial Attacks on Face Recognition Systems

no code implementations · 9 Jun 2022 · Shehzeen Hussain, Todd Huster, Chris Mesterharm, Paarth Neekhara, Kevin An, Malhar Jere, Harshvardhan Sikka, Farinaz Koushanfar

We find that the white-box attack success rate of a pure U-Net ATN falls substantially short of gradient-based attacks like PGD on large face recognition datasets.

Tasks: Face Identification, Face Recognition, +1
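The gradient-based baseline named in the snippet above, PGD, can be sketched in a few lines. The toy version below attacks a linear scorer under an L-infinity budget; the model, margin loss, and step sizes are illustrative assumptions, not the paper's actual face-recognition setup.

```python
import numpy as np

def pgd_linf(x, w, b, y, eps=0.1, alpha=0.02, steps=10):
    """Projected Gradient Descent attack (L-inf ball) on a linear scorer f(x) = w @ x + b.

    Illustrative stand-in for the gradient-based attacks the paper compares
    against; a real attack would backpropagate through a deep network.
    """
    x_adv = x.copy()
    # Loss to maximize: L(x) = -y * f(x); its gradient w.r.t. x is -y * w
    # (constant here because the model is linear).
    grad = -y * w
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad)      # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back into the eps-ball
    return x_adv
```

Each iteration takes a fixed-size step in the sign of the loss gradient, then projects back into the allowed perturbation ball, so the final example stays within `eps` of the input while reducing the correct-class score.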

Scaled-Time-Attention Robust Edge Network

no code implementations · 9 Jul 2021 · Richard Lau, Lihan Yao, Todd Huster, William Johnson, Stephen Arleth, Justin Wong, Devin Ridge, Michael Fletcher, William C. Headley

We demonstrate that STARE is applicable to a variety of applications with improved performance and lower implementation complexity.

Tasks: Time Series Prediction

TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation

no code implementations · 18 Mar 2021 · Todd Huster, Emmanuel Ekwedike

Deep neural networks (DNNs) are vulnerable to "backdoor" poisoning attacks, in which an adversary implants a secret trigger into an otherwise normally functioning model.
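The "backdoor" poisoning described in the snippet above can be illustrated with a minimal data-poisoning sketch: a small trigger patch is stamped into training images, and their labels are flipped to an attacker-chosen target. The patch size, value, and placement here are arbitrary assumptions for illustration, not the paper's construction.

```python
import numpy as np

def poison(images, labels, target_label, trigger_value=1.0, patch=3):
    """Stamp a square trigger into each image and relabel it with the target class.

    Hypothetical illustration of backdoor data poisoning: a model trained on a
    mix of clean and poisoned data behaves normally on clean inputs but emits
    target_label whenever the trigger is present.
    """
    poisoned = images.copy()
    poisoned[:, -patch:, -patch:] = trigger_value       # bottom-right trigger patch
    new_labels = np.full_like(labels, target_label)     # attacker-chosen label
    return poisoned, new_labels
```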

Pareto GAN: Extending the Representational Power of GANs to Heavy-Tailed Distributions

no code implementations · 22 Jan 2021 · Todd Huster, Jeremy E. J. Cohen, Zinan Lin, Kevin Chan, Charles Kamhoua, Nandi Leslie, Cho-Yu Jason Chiang, Vyas Sekar

A Pareto GAN leverages extreme value theory and the functional properties of neural networks to learn a distribution that matches the asymptotic behavior of the marginal distributions of the features.

Tasks: Epidemiology, Open-Ended Question Answering
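The extreme-value-theory building block behind the snippet above can be sketched with the generalized Pareto distribution: pushing uniform noise through its inverse CDF yields samples whose tail decays like a power law with index set by the shape parameter. The shape and scale values below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def gpd_inverse_cdf(u, xi=0.5, scale=1.0):
    """Inverse CDF of the generalized Pareto distribution for xi > 0 (heavy tail).

    Transforming u ~ Uniform(0, 1) through this function produces heavy-tailed
    samples, the kind of asymptotic behavior a bounded generator on its own
    cannot match.
    """
    return scale / xi * ((1.0 - u) ** (-xi) - 1.0)
```

Composing a neural generator with a transform like this is one way to let the network shape the bulk of the distribution while the parametric tail governs extreme values.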

Universal Lipschitz Approximation in Bounded Depth Neural Networks

no code implementations · 9 Apr 2019 · Jeremy E. J. Cohen, Todd Huster, Ra Cohen

Adversarial attacks against machine learning models are a significant obstacle to our increasing reliance on these models.

Tasks: BIG-bench Machine Learning

Limitations of the Lipschitz constant as a defense against adversarial examples

no code implementations · 25 Jul 2018 · Todd Huster, Cho-Yu Jason Chiang, Ritu Chadha

Several recent papers have proposed using Lipschitz constants to limit the susceptibility of neural networks to adversarial examples.
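The standard Lipschitz-based defense mentioned above bounds how much a network's output can change under a bounded input perturbation. A common upper bound for a ReLU network multiplies the spectral norms of its weight matrices, since ReLU is itself 1-Lipschitz; the random two-layer setup in the test is illustrative, not the paper's construction.

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Upper-bound the Lipschitz constant of a feed-forward ReLU network.

    The product of the layers' spectral norms (largest singular values) bounds
    the whole network, because composing L1- and L2-Lipschitz maps yields an
    (L1 * L2)-Lipschitz map and ReLU contributes a factor of 1.
    """
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)  # spectral norm of the weight matrix
    return bound
```

A small bound guarantees that an eps-sized input perturbation moves the output by at most `bound * eps`, which is the mechanism such defenses rely on; the paper's point is that this guarantee also constrains what the network can represent.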
