7 Dec 2019 • Malhar Jere, Sandro Herbig, Christine Lind, Farinaz Koushanfar
Deep neural networks for image classification are known to be vulnerable to adversarial samples: benign images with sub-perceptual noise added that can reliably fool trained networks. This vulnerability poses a significant risk to their commercial deployment.
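To illustrate the mechanism the abstract refers to, here is a minimal sketch of a gradient-sign (FGSM-style) adversarial perturbation on a toy linear classifier. All weights, inputs, and the epsilon budget are hypothetical values chosen for illustration; they are not from the paper.

```python
import numpy as np

# Toy linear classifier: predict class 1 if w @ x > 0, else class 0.
# All values are illustrative, not taken from the paper.
w = np.array([0.5, -0.5, 0.5, -0.5])   # fixed "trained" weights
x = np.array([0.4, 0.2, 0.3, 0.2])     # benign input ("image" pixels)

def predict(v):
    return int(w @ v > 0)

# FGSM-style step: the score gradient d(w @ x)/dx is simply w, so stepping
# each pixel by -eps * sign(w) lowers the score as fast as possible under
# an L-infinity budget of eps.
eps = 0.1                               # sub-perceptual perturbation budget
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))       # prediction flips from 1 to 0
print(np.max(np.abs(x_adv - x)))        # perturbation never exceeds eps
```

Even though no single pixel changes by more than 0.1, the coordinated sign of the perturbation is enough to push the score across the decision boundary, which is the essence of a sub-perceptual adversarial sample.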