no code implementations • 8 Jul 2021 • Daniel Park, Haidar Khan, Azer Khan, Alex Gittens, Bülent Yener
Adversarial examples pose a threat to deep neural network models in a variety of scenarios, from "white box" settings, where the adversary has complete knowledge of the model, to the opposite "black box" setting.
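The white-box threat model can be illustrated with a minimal gradient-sign perturbation (in the style of FGSM) on a toy linear classifier; this is only a sketch of the general idea, not the method from the paper, and the weights, input, and step size below are illustrative assumptions.

```python
import numpy as np

# Toy logistic-regression "model"; in the white-box setting the
# adversary knows w and b and can compute exact input gradients.
w = np.array([2.0, -1.0])  # illustrative weights
b = 0.0
x = np.array([1.0, 1.0])   # clean input, true label y = 1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

# Gradient of the cross-entropy loss w.r.t. the input for label y:
# dL/dx = (sigmoid(w.x + b) - y) * w
y = 1.0
grad_x = (predict(x) - y) * w

# FGSM-style step: perturb the input in the sign of the loss gradient.
eps = 0.6  # illustrative perturbation budget
x_adv = x + eps * np.sign(grad_x)

print(predict(x), predict(x_adv))  # confidence drops after the attack
```

With these toy values the clean input is classified as the positive class, while the perturbed input crosses the decision boundary; in a black-box setting the adversary lacks `w` and must estimate or transfer such gradients instead.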
no code implementations • ICLR 2020 • Haidar Khan, Daniel Park, Azer Khan, Bülent Yener
Adversarial examples pose a threat to deep neural network models in a variety of scenarios, from settings where the adversary has complete knowledge of the model to the opposite "black box" setting.