Search Results for author: Hengwei Zhang

Found 4 papers, 1 papers with code

Adversarial Example Soups: Improving Transferability and Stealthiness for Free

no code implementations • 27 Feb 2024 • Bo Yang, Hengwei Zhang, Jindong Wang, Yulong Yang, Chenhao Lin, Chao Shen, Zhengyu Zhao

Transferable adversarial examples pose practical security risks, since they can mislead a target model without knowledge of its internals.
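Going only by the title and snippet above, the "soup" presumably averages several candidate adversarial examples instead of keeping a single one. A minimal sketch of that averaging idea, assuming an eps-ball constraint around the clean input (the function name and the projection step are ours, not the authors'):

```python
import numpy as np

def adversarial_soup(x, candidates, eps):
    """Average several candidate adversarial examples into one "soup",
    then keep the result inside the eps-ball around the clean input x.
    Sketch of the averaging idea only; the clip is our assumption."""
    soup = np.mean(np.stack(candidates), axis=0)
    return np.clip(soup, x - eps, x + eps)

# Usage: three candidates produced by (hypothetical) attack runs.
x = np.zeros(4)
candidates = [x + 0.10, x + 0.06, x + 0.08]
soup = adversarial_soup(x, candidates, eps=0.1)   # elementwise mean, 0.08
```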

Adversarial example generation with AdaBelief Optimizer and Crop Invariance

no code implementations • 7 Feb 2021 • Bo Yang, Hengwei Zhang, Yuchen Zhang, Kaiyong Xu, Jindong Wang

ABI-FGM and CIM can be readily combined into a strong gradient-based attack that further boosts the success rate of adversarial examples in black-box settings.
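From the title, ABI-FGM is an AdaBelief-based iterative fast gradient method and CIM a crop-invariance method. A toy sketch of how the two might combine, on a generic gradient oracle rather than a real model; the crop-masking, hyperparameters, and update details are our assumptions, not the paper's code:

```python
import numpy as np

def crop_invariant_grad(x, grad_fn, n_crops=4, rng=None):
    """Average gradients over randomly cropped (zero-padded) copies of x,
    a stand-in for the crop-invariance idea."""
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x)
    h, w = x.shape
    for _ in range(n_crops):
        ch = rng.integers(h // 2, h + 1)   # random crop height
        cw = rng.integers(w // 2, w + 1)   # random crop width
        mask = np.zeros_like(x)
        mask[:ch, :cw] = 1.0               # keep a crop, zero out the rest
        g += grad_fn(x * mask)
    return g / n_crops

def abi_fgm(x, grad_fn, eps=0.1, steps=10, beta1=0.9, beta2=0.999):
    """AdaBelief-style iterative sign attack (our sketch): the second-moment
    term tracks (g - m)**2, as in AdaBelief, rather than g**2 as in Adam."""
    x_adv = x.copy()
    m = np.zeros_like(x)
    s = np.zeros_like(x)
    alpha = eps / steps                    # per-step budget
    for t in range(1, steps + 1):
        g = crop_invariant_grad(x_adv, grad_fn)
        m = beta1 * m + (1 - beta1) * g
        s = beta2 * s + (1 - beta2) * (g - m) ** 2
        m_hat = m / (1 - beta1 ** t)       # bias correction
        s_hat = s / (1 - beta2 ** t)
        x_adv = x_adv + alpha * np.sign(m_hat / (np.sqrt(s_hat) + 1e-8))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the eps-ball
    return x_adv
```

In a real attack, `grad_fn` would return the gradient of the classification loss with respect to the input image; here it is just an abstract callable.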

Random Transformation of Image Brightness for Adversarial Attack

1 code implementation • 12 Jan 2021 • Bo Yang, Kaiyong Xu, Hengjun Wang, Hengwei Zhang

Adversarial attacks can thus be an important method for evaluating and selecting robust models before deep neural networks are deployed in safety-critical applications.

Adversarial Attack • Image Augmentation
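The title suggests averaging gradients over randomly brightness-scaled copies of the input before taking an attack step. A minimal sketch under that reading; the brightness range, sample count, and function names are illustrative assumptions, not taken from the paper's released code:

```python
import numpy as np

def random_brightness_grad(x, grad_fn, n=4, low=0.6, high=1.4, rng=None):
    """Average gradients over randomly brightness-scaled copies of x.
    The scaling factor range [low, high] is our guess."""
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(n):
        c = rng.uniform(low, high)     # random brightness factor
        g += c * grad_fn(c * x)        # chain rule through the scaling
    return g / n

def fgsm_with_brightness(x, grad_fn, eps=0.05):
    """One FGSM-style sign step using the brightness-averaged gradient."""
    return x + eps * np.sign(random_brightness_grad(x, grad_fn))
```

As above, `grad_fn` stands in for the input-gradient of a model's loss; in the transfer-attack setting it would come from a white-box surrogate model.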

Boosting Adversarial Attacks on Neural Networks with Better Optimizer

no code implementations • 1 Dec 2020 • Heng Yin, Hengwei Zhang, Jindong Wang, Ruiyu Dou

However, the success rate of adversarial attacks in black-box settings can be further improved.
