1 code implementation • CVPR 2022 • Hadi Salman, Saachi Jain, Eric Wong, Aleksander Mądry
Certified patch defenses can guarantee the robustness of an image classifier to arbitrary changes within a bounded contiguous region.
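The certification argument behind many ablation-based patch defenses is a voting bound: classify many masked (ablated) copies of the image and certify the prediction if no patch-bounded attacker could flip enough votes to change the majority. Below is a minimal sketch of that check, assuming column ablations and a hypothetical `certify_patch_robustness` helper; it illustrates the general idea, not this paper's released implementation.

```python
# Hedged sketch of ablation-based patch certification (illustrative only,
# not this paper's code): a patch of width `patch_width` can intersect at
# most patch_width + ablation_width - 1 column ablations, so it can change
# at most that many votes.
import numpy as np

def certify_patch_robustness(votes, patch_width, ablation_width):
    """votes: per-class counts from classifying each column ablation."""
    delta = patch_width + ablation_width - 1
    order = np.argsort(votes)[::-1]
    top, runner_up = order[0], order[1]
    margin = votes[top] - votes[runner_up]
    # Certified if moving `delta` votes from the top class to the runner-up
    # still cannot change the argmax.
    return top, margin > 2 * delta

# Example: 224 column ablations voting over 10 classes (toy numbers).
votes = np.zeros(10, dtype=int)
votes[3], votes[7] = 180, 44
label, certified = certify_patch_robustness(votes, patch_width=32, ablation_width=19)
print(label, certified)  # 3, True: margin 136 exceeds 2 * 50
```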
2 code implementations • 11 May 2021 • Eric Wong, Shibani Santurkar, Aleksander Mądry
We show how fitting sparse linear models over learned deep feature representations can lead to more debuggable neural networks.
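As a rough illustration of that recipe, the sketch below fits an L1-penalized (sparse) logistic regression with scikit-learn on frozen ResNet-18 features from torchvision; the backbone, regularization strength, and toy data are assumptions made for illustration, not the authors' released code.

```python
# Hedged sketch: a sparse linear model over frozen deep features, so each
# class prediction depends on only a few inspectable features.
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for real images and labels (assumed shapes for illustration).
x = torch.randn(64, 3, 224, 224)
y = np.random.randint(0, 10, size=64)

# Frozen feature extractor: ResNet-18 with its final linear layer removed.
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()
backbone.eval()

with torch.no_grad():
    feats = backbone(x).numpy()  # N x 512 learned feature representations

# L1 penalty drives most weights to exactly zero, yielding a sparse,
# more debuggable linear layer on top of the deep features.
clf = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=1000)
clf.fit(feats, y)

# Inspect how many features each class actually relies on.
print("non-zero weights per class:", (np.abs(clf.coef_) > 0).sum(axis=1))
```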
no code implementations • NeurIPS 2018 • Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Mądry
We postulate that the difficulty of training robust classifiers stems, at least partially, from an inherently larger sample complexity: adversarially robust generalization can require substantially more data than standard generalization.
no code implementations • ICML 2018 • Shibani Santurkar, Ludwig Schmidt, Aleksander Mądry
A basic, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether they are truly able to capture all the fundamental characteristics of the distributions they are trained on.