Search Results for author: Weilin Xu

Found 6 papers, 4 papers with code

Robust Principles: Architectural Design Principles for Adversarially Robust CNNs

1 code implementation • 30 Aug 2023 • Shengyun Peng, Weilin Xu, Cory Cornelius, Matthew Hull, Kevin Li, Rahul Duggal, Mansi Phute, Jason Martin, Duen Horng Chau

Our research aims to unify existing works' diverging opinions on how architectural components affect the adversarial robustness of CNNs.

Adversarial Robustness

RobArch: Designing Robust Architectures against Adversarial Attacks

1 code implementation • 8 Jan 2023 • Shengyun Peng, Weilin Xu, Cory Cornelius, Kevin Li, Rahul Duggal, Duen Horng Chau, Jason Martin

Adversarial Training is the most effective approach for improving the robustness of Deep Neural Networks (DNNs).
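As background for the two entries above, here is a minimal sketch of PGD-based adversarial training, the general technique both papers build on. The toy model, the random placeholder data, and the hyperparameters are illustrative assumptions, not the papers' configurations.

```python
# Minimal PGD-based adversarial training sketch (illustrative, not the papers' setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent inside an L-infinity ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()           # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy stand-in for a CNN
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

for step in range(100):                          # toy loop on random placeholder data
    x = torch.rand(32, 3, 32, 32)                # batch of images in [0, 1]
    y = torch.randint(0, 10, (32,))
    x_adv = pgd_attack(model, x, y)              # craft adversarial examples on the fly
    opt.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()  # train on the perturbed batch
    opt.step()
```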

Membership-Doctor: Comprehensive Assessment of Membership Inference Against Machine Learning Models

no code implementations • 22 Aug 2022 • Xinlei He, Zheng Li, Weilin Xu, Cory Cornelius, Yang Zhang

Finally, we find that data augmentation degrades the performance of existing attacks to a larger extent, and we propose an adaptive attack that uses augmentation to train the shadow and attack models, which improves attack performance.

Data Augmentation
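The snippet above mentions training shadow and attack models with augmentation; below is a minimal sketch of the classic shadow-model membership inference pipeline with a toy noise-based augmentation standing in for the paper's augmentations. All models, data, and the augmentation itself are illustrative placeholders, not the paper's method.

```python
# Shadow-model membership inference sketch (illustrative placeholders throughout).
import torch
import torch.nn as nn
import torch.nn.functional as F

def fit(model, x, y, augment=False, epochs=50):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        xb = x + 0.05 * torch.randn_like(x) if augment else x  # toy augmentation
        opt.zero_grad()
        F.cross_entropy(model(xb), y).backward()
        opt.step()

def confidences(model, x):
    with torch.no_grad():
        return F.softmax(model(x), dim=1)

# Shadow data the adversary controls, split into "member" and "non-member" halves.
x_in, y_in = torch.randn(256, 20), torch.randint(0, 2, (256,))
x_out, y_out = torch.randn(256, 20), torch.randint(0, 2, (256,))

shadow = nn.Linear(20, 2)
fit(shadow, x_in, y_in, augment=True)            # mimic the target's augmented training

# Attack model: classify member vs. non-member from the shadow model's confidence vectors.
feats = torch.cat([confidences(shadow, x_in), confidences(shadow, x_out)])
labels = torch.cat([torch.ones(256), torch.zeros(256)]).long()
attack = nn.Linear(2, 2)
fit(attack, feats, labels)

# Query the attack model on a candidate record's confidence vector.
print(confidences(attack, confidences(shadow, x_in[:1])))
```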

Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples

1 code implementation • 30 May 2017 • Weilin Xu, David Evans, Yanjun Qi

Feature squeezing is a recently-introduced framework for mitigating and detecting adversarial examples.
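A minimal sketch of the feature-squeezing detection idea: compare the model's softmax output on an input against its output on "squeezed" copies (here, bit-depth reduction and median smoothing) and flag the input when they disagree too much. The exact squeezers and the threshold below are illustrative; the paper tunes them per dataset.

```python
# Feature-squeezing detector sketch (squeezers and threshold are illustrative).
import torch
import torch.nn.functional as F

def reduce_bit_depth(x, bits=4):
    """Quantize pixel values in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def median_smooth(x, k=3):
    """Median filter over k x k windows (a simple spatial squeezer)."""
    pad = k // 2
    patches = F.unfold(F.pad(x, (pad, pad, pad, pad), mode="reflect"), k)
    out = patches.view(x.size(0), x.size(1), k * k, -1).median(dim=2).values
    return out.view_as(x)

def is_adversarial(model, x, threshold=1.0):
    """Flag x if any squeezed copy moves the softmax output by more than threshold (L1)."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)
        scores = []
        for squeeze in (reduce_bit_depth, median_smooth):
            p_sq = F.softmax(model(squeeze(x)), dim=1)
            scores.append((p - p_sq).abs().sum(dim=1))
        return torch.stack(scores).max(dim=0).values > threshold

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))  # toy model
print(is_adversarial(model, torch.rand(4, 3, 32, 32)))
```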

Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks

2 code implementations • Network and Distributed System Security Symposium 2018 • Weilin Xu, David Evans, Yanjun Qi

Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by adversarial examples that are generated by adding small but purposeful distortions to natural examples.

DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples

no code implementations • 22 Feb 2017 • Ji Gao, Beilun Wang, Zeming Lin, Weilin Xu, Yanjun Qi

By identifying and removing unnecessary features in a DNN model, DeepCloak limits the capacity an attacker can use to generate adversarial samples and therefore increases robustness against such inputs.

General Classification
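A minimal sketch of the masking idea in the snippet above: insert a fixed binary mask in front of the classifier that zeroes out features deemed unnecessary, shrinking the space an attacker can perturb. The selection rule used here (dropping low-variance features on clean data) is a stand-in assumption, not the paper's criterion, and the model is a toy placeholder.

```python
# Feature-masking sketch (selection rule and model are illustrative placeholders).
import torch
import torch.nn as nn

class FeatureMask(nn.Module):
    """Element-wise binary mask applied to a feature vector; not trainable."""
    def __init__(self, mask):
        super().__init__()
        self.register_buffer("mask", mask.float())

    def forward(self, x):
        return x * self.mask

feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
classifier = nn.Linear(64, 10)

# Choose which features to keep: here, drop the 25% with the lowest activation
# variance on a batch of clean inputs (a stand-in selection rule).
with torch.no_grad():
    acts = feature_extractor(torch.rand(128, 3, 32, 32))
    keep = acts.var(dim=0) >= acts.var(dim=0).quantile(0.25)

masked_model = nn.Sequential(feature_extractor, FeatureMask(keep), classifier)
print(masked_model(torch.rand(1, 3, 32, 32)).shape)   # masked forward pass still works
```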
