no code implementations • 9 Jun 2022 • Shehzeen Hussain, Todd Huster, Chris Mesterharm, Paarth Neekhara, Kevin An, Malhar Jere, Harshvardhan Sikka, Farinaz Koushanfar
We find that the white-box attack success rate of a pure U-Net ATN (adversarial transformation network) falls substantially short of that of gradient-based attacks like PGD on large face recognition datasets.
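For reference, PGD perturbs an input along the sign of the loss gradient and projects back into an ε-ball after every step. A minimal PyTorch sketch, assuming a classifier `model` over images in [0, 1]; the budget and step-size values are illustrative defaults, not the paper's setup:

```python
import torch

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: iterated signed-gradient ascent with projection."""
    # Random start inside the epsilon-ball, clipped to valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                     # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project to ball
            x_adv = x_adv.clamp(0, 1)                               # keep pixels valid
    return x_adv
```

Because each step requires a gradient through the target model, PGD is strong but slow; a feed-forward ATN trades that per-example optimization for a single forward pass, which is the gap the comparison above measures.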
no code implementations • 9 Jul 2021 • Richard Lau, Lihan Yao, Todd Huster, William Johnson, Stephen Arleth, Justin Wong, Devin Ridge, Michael Fletcher, William C. Headley
We demonstrate that STARE applies to a variety of tasks with improved performance and lower implementation complexity.
no code implementations • 18 Mar 2021 • Todd Huster, Emmanuel Ekwedike
Deep neural networks (DNNs) are vulnerable to "backdoor" poisoning attacks, in which an adversary implants a secret trigger into an otherwise normally functioning model.
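As a concrete illustration of this threat model, a poisoning attack might stamp a small pixel pattern onto a fraction of the training set and relabel those examples to an attacker-chosen class. A sketch assuming NHWC image arrays in [0, 1]; the trigger shape, poison rate, and the helper name `poison` are hypothetical, not details from the paper:

```python
import numpy as np

def poison(images, labels, target_label, rate=0.05, seed=0):
    """Stamp a 4x4 white 'trigger' patch onto a random subset of
    training images and relabel them to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -4:, -4:, :] = 1.0   # trigger in the bottom-right corner
    labels[idx] = target_label       # attacker-chosen class
    return images, labels
```

A model trained on the poisoned set behaves normally on clean inputs but predicts `target_label` whenever the trigger is present, which is what makes such backdoors hard to detect.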
no code implementations • 22 Jan 2021 • Todd Huster, Jeremy E. J. Cohen, Zinan Lin, Kevin Chan, Charles Kamhoua, Nandi Leslie, Cho-Yu Jason Chiang, Vyas Sekar
A Pareto GAN leverages extreme value theory and the functional properties of neural networks to learn a distribution that matches the asymptotic behavior of the marginal distributions of the features.
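One way to realize such tail matching is to push a bounded generator output through the inverse CDF of a Pareto distribution, which yields power-law tails by construction. A minimal PyTorch sketch of that idea; the layer name, parameterization, and placement are assumptions rather than the paper's exact construction:

```python
import torch

class ParetoTail(torch.nn.Module):
    """Map an unbounded pre-activation to a heavy-tailed sample by
    squashing it into (0, 1) and applying the inverse Pareto CDF."""
    def __init__(self, alpha=1.5, x_min=1.0):
        super().__init__()
        self.alpha, self.x_min = alpha, x_min  # tail index and scale

    def forward(self, z):
        u = torch.sigmoid(z).clamp(1e-6, 1 - 1e-6)              # u in (0, 1)
        return self.x_min * (1.0 - u) ** (-1.0 / self.alpha)    # inverse CDF
```

Smaller `alpha` gives heavier tails; a standard GAN with bounded or Gaussian-like outputs cannot represent such asymptotics no matter how it is trained, which motivates building the tail into the architecture.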
no code implementations • 19 Oct 2020 • Sanjai Narain, Emily Mak, Dana Chee, Todd Huster, Jeremy Cohen, Kishore Pochiraju, Brendan Englot, Niraj K. Jha, Karthik Narayan
Central to the design of many robot systems and their controllers is solving a constrained blackbox optimization problem.
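To make "constrained blackbox" concrete: the objective and constraints can only be evaluated, not differentiated, so even a penalized random search is a valid (if weak) solver. A baseline sketch assuming scalar-valued callables with constraints satisfied when g(x) <= 0; the paper's approach is more sophisticated than this:

```python
import numpy as np

def random_search(objective, constraints, lo, hi, n=10_000, seed=0):
    """Penalized random search over the box [lo, hi]: no gradients of
    `objective` or `constraints` are used, only point evaluations."""
    rng = np.random.default_rng(seed)
    best_x, best_f = None, np.inf
    for _ in range(n):
        x = rng.uniform(lo, hi)
        penalty = sum(max(0.0, g(x)) for g in constraints)  # violation amount
        f = objective(x) + 1e3 * penalty                    # penalize infeasibility
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f
```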
no code implementations • 9 Apr 2019 • Jeremy E. J. Cohen, Todd Huster, Ra Cohen
Adversarial attacks against machine learning models are a significant obstacle to our increasing reliance on these models.
no code implementations • 25 Jul 2018 • Todd Huster, Cho-Yu Jason Chiang, Ritu Chadha
Several recent papers have proposed using Lipschitz constants to limit the susceptibility of neural networks to adversarial examples.
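The bound typically discussed is the product of per-layer operator norms, which upper-bounds the network's Lipschitz constant whenever the activations are 1-Lipschitz (e.g., ReLU). A sketch for fully connected layers; convolutions would need their own operator-norm estimate, and the potential looseness of this bound is central to its limitations as a defense:

```python
import torch

def lipschitz_upper_bound(model):
    """Naive Lipschitz upper bound for a feed-forward network: the
    product of spectral norms of its Linear weight matrices, valid
    when interleaved activations are 1-Lipschitz (e.g., ReLU)."""
    bound = 1.0
    for m in model.modules():
        if isinstance(m, torch.nn.Linear):
            # Largest singular value = operator norm of the layer.
            bound *= torch.linalg.matrix_norm(m.weight, ord=2).item()
    return bound
```

If a classifier's margin at an input exceeds what a perturbation of size ε can change under this bound, robustness is certified; but because the product bound can vastly overestimate the true Lipschitz constant, such certificates are often too weak to be useful.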