Search Results for author: Andras Rozsa

Found 12 papers, 1 paper with code

Adversarial Attack on Deep Learning-Based Splice Localization

1 code implementation • 17 Apr 2020 • Andras Rozsa, Zheng Zhong, Terrance E. Boult

In image forensics, researchers have proposed various approaches to detect and/or localize manipulations such as splices.

Adversarial Attack, Adversarial Robustness +1

Improved Adversarial Robustness by Reducing Open Space Risk via Tent Activations

no code implementations • 7 Aug 2019 • Andras Rozsa, Terrance E. Boult

On the CIFAR-10 dataset, our approach improves the average accuracy against the six white-box adversarial attacks to 73.5% from 41.8% achieved by adversarial training via PGD.

Adversarial Robustness, BIG-bench Machine Learning
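The white-box PGD attack that this abstract compares against can be sketched in a few lines. This toy uses a logistic-regression "model" rather than the paper's CIFAR-10 networks, so the model, parameters, and step sizes are illustrative assumptions only:

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.1, steps=10):
    """L_inf-bounded PGD on a logistic-regression model (toy sketch).

    x: input vector, y: label in {0, 1}, (w, b): model parameters.
    Each step ascends the sign of the loss gradient, then projects
    back into the eps-ball around the original input.
    """
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))  # sigmoid output
        grad = (p - y) * w                          # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad)       # gradient-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project into eps-ball
    return x_adv

# Toy example: a correctly classified point pushed across the boundary.
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])   # w @ x = 0.2 > 0 -> classified as class 1
x_adv = pgd_attack(x, y=1.0, w=w, b=b)
print(w @ x_adv + b)       # logit drops below 0: flipped to class 0
```

Adversarial training, as in the PGD baseline above, replaces (or mixes) clean training inputs with such perturbed inputs at every iteration.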

Facial Attributes: Accuracy and Adversarial Robustness

no code implementations • 4 Jan 2018 • Andras Rozsa, Manuel Günther, Ethan M. Rudd, Terrance E. Boult

Facial attributes, an emerging class of soft biometrics, must be automatically and reliably extracted from images in order to be usable in stand-alone systems.

Adversarial Robustness, Attribute +1

Adversarial Robustness: Softmax versus Openmax

no code implementations • 5 Aug 2017 • Andras Rozsa, Manuel Günther, Terrance E. Boult

Deep neural networks (DNNs) provide state-of-the-art results on various tasks and are widely used in real world applications.

Adversarial Robustness, Open Set Learning

Towards Robust Deep Neural Networks with BANG

no code implementations • 1 Dec 2016 • Andras Rozsa, Manuel Günther, Terrance E. Boult

Machine learning models, including state-of-the-art deep neural networks, are vulnerable to small perturbations that cause unexpected classification errors.

BIG-bench Machine Learning, Data Augmentation +1

LOTS about Attacking Deep Features

no code implementations • 18 Nov 2016 • Andras Rozsa, Manuel Günther, Terrance E. Boult

We demonstrate that iterative LOTS is very effective and show that systems utilizing deep features are easier to attack than the end-to-end network.

Adversarial Robustness
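The iterative LOTS attack described above perturbs an input so that its internal (deep) feature representation approaches that of a target. The sketch below substitutes a linear map for the deep feature extractor, so the matrix, inputs, and step size are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def iterative_lots(x, target_feat, W, lr=0.1, steps=100):
    """Sketch of an iterative LOTS-style attack on deep features.

    f(x) = W @ x stands in for a deep feature extractor (an assumption
    for illustration; the paper attacks real network internals). Each
    step moves x so that its features approach target_feat.
    """
    x_adv = x.copy()
    for _ in range(steps):
        diff = W @ x_adv - target_feat   # feature-space error
        grad = W.T @ diff                # d(0.5 * ||f(x) - t||^2) / dx
        x_adv = x_adv - lr * grad        # descend toward target features
    return x_adv

# Toy example: drive one point's "deep features" onto another's.
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.5]])
x_src = np.array([1.0, -1.0, 0.5])
x_tgt = np.array([-0.5, 2.0, 1.0])
x_adv = iterative_lots(x_src, W @ x_tgt, W)
print(np.linalg.norm(W @ x_adv - W @ x_tgt))  # near zero: features match
```

A system that matches inputs by comparing deep features would then treat `x_adv` as the target, which is why the abstract finds feature-based systems easier to attack than the end-to-end network.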

AFFACT - Alignment-Free Facial Attribute Classification Technique

no code implementations • 18 Nov 2016 • Manuel Günther, Andras Rozsa, Terrance E. Boult

Using an ensemble of three ResNets, we obtain the new state-of-the-art facial attribute classification error of 8.00% on the aligned images of the CelebA dataset.

Attribute Classification +3

Are Accuracy and Robustness Correlated?

no code implementations • 14 Oct 2016 • Andras Rozsa, Manuel Günther, Terrance E. Boult

In this paper, we perform experiments on various adversarial example generation approaches with multiple deep convolutional neural networks, including Residual Networks, the best-performing models of the ImageNet Large-Scale Visual Recognition Challenge 2015.

BIG-bench Machine Learning, General Classification +1

Assessing Threat of Adversarial Examples on Deep Neural Networks

no code implementations • 13 Oct 2016 • Abigail Graese, Andras Rozsa, Terrance E. Boult

Deep neural networks are facing a potential security threat from adversarial examples, inputs that look normal but cause an incorrect classification by the deep neural network.

Binarization, General Classification
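Judging from the paper's task tags, input binarization is among the preprocessing defenses it assesses. The toy below (pixel values, threshold, and perturbation are illustrative assumptions) shows why thresholding can erase a small L_inf perturbation like the ones described above:

```python
import numpy as np

def binarize(x, threshold=0.5):
    """Threshold pixel values to {0, 1} as a simple preprocessing step.

    A sketch of a binarization defense; the paper's actual experimental
    setup is not reproduced here.
    """
    return (x > threshold).astype(np.float64)

# A small perturbation that nudges raw pixels often cannot survive
# binarization: values far from the threshold snap back to 0 or 1.
x = np.array([0.9, 0.1, 0.8, 0.2])                       # clean "pixels"
delta = np.array([-0.05, 0.05, -0.05, 0.05])             # small perturbation
x_adv = np.clip(x + delta, 0.0, 1.0)
print(np.array_equal(binarize(x), binarize(x_adv)))      # True
```

Only pixels already near the threshold can be flipped by such a perturbation, which limits what an attacker constrained to small changes can accomplish against the binarized input.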

Are Facial Attributes Adversarially Robust?

no code implementations • 18 May 2016 • Andras Rozsa, Manuel Günther, Ethan M. Rudd, Terrance E. Boult

We show that FFA generates more adversarial examples than other related algorithms, and that DCNNs for certain attributes are generally robust to adversarial inputs, while DCNNs for other attributes are not.

Attribute, Attribute Extraction +2

Adversarial Diversity and Hard Positive Generation

no code implementations • 5 May 2016 • Andras Rozsa, Ethan M. Rudd, Terrance E. Boult

Finally, we demonstrate on LeNet and GoogLeNet that fine-tuning with a diverse set of hard positives improves the robustness of these networks compared to training with prior methods of generating adversarial images.

Data Augmentation

A Survey of Stealth Malware: Attacks, Mitigation Measures, and Steps Toward Autonomous Open World Solutions

no code implementations • 19 Mar 2016 • Ethan M. Rudd, Andras Rozsa, Manuel Günther, Terrance E. Boult

While machine learning offers promising potential for increasingly autonomous solutions with improved generalization to new malware types, both at the network level and at the host level, our findings suggest that several flawed assumptions inherent to most recognition algorithms prevent a direct mapping between the stealth malware recognition problem and a machine learning solution.

BIG-bench Machine Learning
