Search Results for author: Yulong Yang

Found 2 papers, 0 papers with code

Towards Deep Learning Models Resistant to Transfer-based Adversarial Attacks via Data-centric Robust Learning

no code implementations • 15 Oct 2023 • Yulong Yang, Chenhao Lin, Xiang Ji, Qiwei Tian, Qian Li, Hongshan Yang, Zhibo Wang, Chao Shen

Instead, a one-shot adversarial augmentation prior to training is sufficient, and we name this new defense paradigm Data-centric Robust Learning (DRL).

Fairness
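The snippet above contrasts DRL with iterative adversarial training: the augmented data are built once, before training starts. A minimal NumPy sketch of that idea, using a toy linear model and a simple FGSM-style sign perturbation as a stand-in for the paper's augmentation (all names and details here are illustrative, not the authors' implementation):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """One-step sign perturbation (FGSM-style), standing in for the
    one-shot adversarial augmentation."""
    return x + eps * np.sign(grad)

def augment_once(X, y, w, eps=0.1):
    """One-shot augmentation: perturb each input along the gradient of
    a squared loss for a linear model with weights w, keeping labels."""
    # loss = 0.5 * (x @ w - y)^2  =>  d(loss)/dx = (x @ w - y) * w
    grads = (X @ w - y)[:, None] * w[None, :]
    X_adv = fgsm_perturb(X, grads, eps)
    return np.vstack([X, X_adv]), np.concatenate([y, y])

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ w_true

# Build the adversarially augmented training set ONCE, up front ...
X_aug, y_aug = augment_once(X, y, w=rng.normal(size=4), eps=0.1)

# ... then run ordinary (non-adversarial) training on the fixed set.
w_fit, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
```

The defining property is that the inner adversarial step happens only in the data-preparation stage; the training loop itself sees a static dataset, which is what makes the paradigm "data-centric".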

Hard Adversarial Example Mining for Improving Robust Fairness

no code implementations • 3 Aug 2023 • Chenhao Lin, Xiang Ji, Yulong Yang, Qian Li, Chao Shen, Run Wang, Liming Fang

Adversarial training (AT) is widely considered the state-of-the-art technique for improving the robustness of deep neural networks (DNNs) against adversarial examples (AE).

Fairness
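The adversarial training (AT) baseline mentioned in this abstract alternates an inner attack step with an outer training step. A toy sketch with a logistic model and a single FGSM inner step (real AT uses multi-step PGD on a DNN; everything here is a simplified illustration, not the paper's method):

```python
import numpy as np

def sigmoid(z):
    # Clip to keep exp() numerically stable.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def adv_example(x, y, w, eps=0.2):
    """Inner maximization: one FGSM step that moves x in the direction
    increasing the logistic loss under the current weights w."""
    p = sigmoid(x @ w)
    grad_x = (p - y) * w  # d(logloss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = (X @ w_true > 0).astype(float)

w = np.zeros(3)
lr = 0.5
for _ in range(200):
    # Inner step: craft adversarial examples for the current model.
    X_adv = np.array([adv_example(x, t, w) for x, t in zip(X, y)])
    # Outer step: gradient descent on the adversarial batch.
    p = sigmoid(X_adv @ w)
    w -= lr * X_adv.T @ (p - y) / len(y)

clean_acc = np.mean((X @ w > 0) == (y > 0.5))
```

Because each training step re-solves the inner attack, AT is far more expensive than standard training; that cost is part of what motivates one-shot alternatives like DRL above.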
