Search Results for author: Michael Tuttle

Found 1 paper, 1 paper with code

Robustifying $\ell_\infty$ Adversarial Training to the Union of Perturbation Models

1 code implementation • NeurIPS 2021 • Ameya D. Patil, Michael Tuttle, Alexander G. Schwing, Naresh R. Shanbhag

Classical adversarial training (AT) frameworks are designed to achieve high adversarial accuracy against a single attack type, typically $\ell_\infty$ norm-bounded perturbations.
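For context, classical $\ell_\infty$ adversarial training pairs an inner PGD attack (maximizing the loss within an $\ell_\infty$ ball of radius $\epsilon$) with an outer minimization of the loss on the crafted examples. The PyTorch sketch below illustrates that standard single-perturbation setup on placeholder data; the model, hyperparameters, and toy inputs are hypothetical, and this is a generic illustration of $\ell_\infty$ AT, not the method proposed in this paper.

```python
# Minimal sketch of one step of classical l_inf adversarial training
# (PGD-based). Hyperparameters and the toy model are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


def pgd_linf(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft l_inf norm-bounded adversarial examples with PGD."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # Ascend the loss, then project back into the l_inf ball of radius eps.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()


def adversarial_training_step(model, optimizer, x, y):
    """One outer minimization step on PGD adversarial examples."""
    model.eval()                      # craft attacks in eval mode
    x_adv = pgd_linf(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy example: a tiny CNN on random 32x32 RGB inputs.
    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.Flatten(), nn.Linear(8 * 32 * 32, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(4, 3, 32, 32)
    y = torch.randint(0, 10, (4,))
    print(adversarial_training_step(model, optimizer, x, y))
```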
