1 code implementation • 17 Apr 2020 • Andras Rozsa, Zheng Zhong, Terrance E. Boult
Regarding image forensics, researchers have proposed various approaches to detect and/or localize manipulations, such as splices.
no code implementations • 7 Aug 2019 • Andras Rozsa, Terrance E. Boult
On the CIFAR-10 dataset, our approach improves the average accuracy against the six white-box adversarial attacks to 73.5% from 41.8% achieved by adversarial training via PGD.
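For context, a minimal sketch of the PGD attack (Madry et al.) that drives adversarial training, assuming a PyTorch image classifier `model` with inputs in [0, 1]; the names and hyperparameters here are illustrative, not the paper's exact setup:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected Gradient Descent: iteratively ascend the loss within
    an L-infinity ball of radius eps around the clean input x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # gradient ascent step, then project back into the eps-ball
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

# adversarial training step (sketch): fit the model on worst-case inputs
# for x, y in loader:
#     loss = F.cross_entropy(model(pgd_attack(model, x, y)), y)
```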
no code implementations • 4 Jan 2018 • Andras Rozsa, Manuel Günther, Ethan M. Rudd, Terrance E. Boult
Facial attributes, an emerging class of soft biometrics, must be extracted from images automatically and reliably in order to be usable in stand-alone systems.
no code implementations • 5 Aug 2017 • Andras Rozsa, Manuel Günther, Terrance E. Boult
Deep neural networks (DNNs) provide state-of-the-art results on various tasks and are widely used in real world applications.
no code implementations • 1 Dec 2016 • Andras Rozsa, Manuel Günther, Terrance E. Boult
Machine learning models, including state-of-the-art deep neural networks, are vulnerable to small perturbations that cause unexpected classification errors.
no code implementations • 18 Nov 2016 • Andras Rozsa, Manuel Günther, Terrance E. Boult
We demonstrate that iterative LOTS (layerwise origin-target synthesis) is highly effective and show that systems utilizing deep features are easier to attack than the end-to-end network.
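As a rough illustration of the idea behind LOTS, the sketch below perturbs an origin image so that its deep feature moves toward a chosen target's feature. `feature_extractor` is a hypothetical handle to the internal layer being attacked, and the max-abs gradient normalization is a common convention rather than necessarily the paper's exact formulation:

```python
import torch

def lots_step(feature_extractor, x_origin, feat_target, step=1.0):
    """One LOTS-style step: descend the Euclidean distance between the
    origin's deep feature and a fixed target feature, in image space."""
    x = x_origin.clone().detach().requires_grad_(True)
    loss = 0.5 * (feat_target - feature_extractor(x)).pow(2).sum()
    loss.backward()
    # normalize so the step size is comparable across inputs
    g = x.grad / x.grad.abs().max().clamp_min(1e-12)
    return (x - step * g).clamp(0, 1).detach()

# iterative LOTS (sketch): repeat until the feature distance is small
# for _ in range(max_iters):
#     x_origin = lots_step(feature_extractor, x_origin, feat_target)
```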
no code implementations • 18 Nov 2016 • Manuel Günther, Andras Rozsa, Terrance E. Boult
Using an ensemble of three ResNets, we obtain the new state-of-the-art facial attribute classification error of 8.00% on the aligned images of the CelebA dataset.
no code implementations • 14 Oct 2016 • Andras Rozsa, Manuel Günther, Terrance E. Boult
In this paper, we experiment with several adversarial example generation approaches on multiple deep convolutional neural networks, including Residual Networks, the best-performing models of the ImageNet Large Scale Visual Recognition Challenge 2015.
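One canonical generation approach that such comparisons typically include is the Fast Gradient Sign Method (FGSM, Goodfellow et al.); a minimal sketch, assuming a PyTorch classifier `model` with inputs in [0, 1] (the paper's own experimental setup may differ):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.007):
    """Fast Gradient Sign Method: a single step in the direction of
    the sign of the input gradient of the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```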
no code implementations • 13 Oct 2016 • Abigail Graese, Andras Rozsa, Terrance E. Boult
Deep neural networks are facing a potential security threat from adversarial examples, inputs that look normal but cause an incorrect classification by the deep neural network.
no code implementations • 18 May 2016 • Andras Rozsa, Manuel Günther, Ethan M. Rudd, Terrance E. Boult
We show that FFA generates more adversarial examples than other related algorithms, and that DCNNs for certain attributes are generally robust to adversarial inputs, while DCNNs for other attributes are not.
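The sketch below is a generic gradient-based attribute flip in the spirit of FFA (Fast Flipping Attribute), not the authors' exact construction: it assumes a hypothetical binary attribute network `attr_net` that outputs a single logit per image, and simply pushes that logit across the decision boundary:

```python
import torch
import torch.nn.functional as F

def flip_attribute(attr_net, x, steps=10, alpha=1/255):
    """Gradient-based flip of a binary attribute decision (generic
    stand-in for FFA): drive the attribute logit toward the opposite
    label until the prediction crosses zero."""
    with torch.no_grad():
        target = (attr_net(x) < 0).float()  # the opposite label
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.binary_cross_entropy_with_logits(attr_net(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv.detach() - alpha * grad.sign()).clamp(0, 1)
        with torch.no_grad():
            if ((attr_net(x_adv) > 0).float() == target).all():
                break  # the attribute decision has flipped
    return x_adv
```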
no code implementations • 5 May 2016 • Andras Rozsa, Ethan M. Rudd, Terrance E. Boult
Finally, we demonstrate on LeNet and GoogLeNet that fine-tuning with a diverse set of hard positives improves the robustness of these networks compared to training with prior methods of generating adversarial images.
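A minimal sketch of fine-tuning with hard positives, i.e., perturbed copies of training inputs that keep their original labels; `make_hard_positive` is a placeholder for any generator (for instance, an FGSM-style perturbation as above), and none of these names come from the paper:

```python
import torch
import torch.nn.functional as F

def finetune_step(model, optimizer, x, y, make_hard_positive):
    """One fine-tuning step on a batch augmented with hard positives:
    the perturbed images are trained with their unchanged labels."""
    x_hard = make_hard_positive(model, x, y)
    x_all = torch.cat([x, x_hard])   # originals + hard positives
    y_all = torch.cat([y, y])        # labels stay the same
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_all), y_all)
    loss.backward()
    optimizer.step()
    return loss.item()
```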
no code implementations • 19 Mar 2016 • Ethan M. Rudd, Andras Rozsa, Manuel Günther, Terrance E. Boult
Machine learning offers promising potential for increasingly autonomous solutions with improved generalization to new malware types, at both the network level and the host level. However, our findings suggest that several flawed assumptions inherent to most recognition algorithms prevent a direct mapping between the stealth malware recognition problem and a machine learning solution.