2 code implementations • CVPR 2023 • Jianping Zhang, Yizhan Huang, Weibin Wu, Michael R. Lyu
However, the variance of the back-propagated gradients in the intermediate blocks of ViTs may still be large, causing the generated adversarial samples to focus on model-specific features and get stuck in poor local optima.
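One common way to mitigate high gradient variance (illustrative only; not necessarily the method of this paper) is to average gradients over several draws before taking an attack step. A minimal sketch with a toy quadratic loss, where `noisy_grad` is a hypothetical stand-in for a noisy back-propagated gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(x, noise_scale=1.0):
    """Toy stand-in for a back-propagated gradient: the true gradient of
    f(x) = sum(x**2) is 2*x; added noise mimics the high-variance gradients
    observed in intermediate blocks."""
    return 2 * x + rng.normal(scale=noise_scale, size=x.shape)

def averaged_grad(x, n_samples=20, noise_scale=1.0):
    """Variance-reduced estimate: average the gradient over several draws."""
    return np.mean([noisy_grad(x, noise_scale) for _ in range(n_samples)],
                   axis=0)

x = np.ones(8)
single = np.stack([noisy_grad(x) for _ in range(500)])
avg = np.stack([averaged_grad(x) for _ in range(500)])
# Averaging over n_samples draws shrinks the estimator's variance
# by roughly a factor of 1/n_samples.
```

The averaged estimator follows the true gradient direction more reliably, which is the intuition behind reducing variance to avoid model-specific poor local optima.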
1 code implementation • 11 Feb 2023 • Wenxuan Wang, Jen-tse Huang, Weibin Wu, Jianping Zhang, Yizhan Huang, Shuqing Li, Pinjia He, Michael Lyu
In addition, we leverage the test cases generated by MTTM to retrain the model we explored, which largely improves model robustness (0% to 5.9% EFR) while maintaining the accuracy on the original test set.
no code implementations • 16 Aug 2022 • Shihurong Yao, Yizhan Huang, Xiaogang Xu
RANLEN uses a dynamically designed mask-based normalization operation, which enhances an image in a spatially varying manner, ensuring that the enhancement results are consistent with the requirements specified by the input mask.
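The spatially varying idea can be illustrated with a toy sketch (a hypothetical helper, not RANLEN's learned operation): normalize each image region independently, with a per-region contrast gain selected by an integer mask.

```python
import numpy as np

def masked_region_normalize(image, mask, gains):
    """Toy spatially varying enhancement: pixels in each mask region are
    normalized to zero mean / unit std within that region, then scaled by a
    region-specific gain. `gains` maps mask label -> desired contrast gain.
    (Hypothetical sketch; RANLEN's actual normalization is learned.)"""
    out = np.zeros_like(image, dtype=float)
    for label, gain in gains.items():
        region = mask == label
        pixels = image[region]
        std = pixels.std() + 1e-8
        out[region] = gain * (pixels - pixels.mean()) / std
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
mask = np.zeros((4, 4), dtype=int)
mask[:, 2:] = 1   # left half is region 0, right half is region 1
enhanced = masked_region_normalize(img, mask, {0: 0.5, 1: 2.0})
```

Each region's output contrast matches the gain requested by the mask, mirroring the requirement that the enhancement be consistent with the input mask.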
2 code implementations • CVPR 2022 • Jianping Zhang, Weibin Wu, Jen-tse Huang, Yizhan Huang, Wenxuan Wang, Yuxin Su, Michael R. Lyu
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples.
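As a concrete illustration of this vulnerability (a standard FGSM sketch on a toy logistic model, not the attack proposed in this paper): an adversarial example perturbs the input by the sign of the loss gradient, bounded by a budget eps.

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """Fast Gradient Sign Method on a toy logistic model.
    Loss: binary cross-entropy of sigmoid(w @ x + b) against label y;
    its gradient w.r.t. x is (sigmoid(z) - y) * w."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))
    grad_x = (p - y) * w
    # Step in the gradient-sign direction, staying within the eps ball.
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=5)
x = rng.normal(size=5)
x_adv = fgsm(x, w, b=0.0, y=1.0, eps=0.1)
# The perturbation is bounded by eps in the L-infinity norm, yet it
# pushes the model's confidence in the true label downward.
```

Even this one-step perturbation reliably degrades the toy model's confidence, which is what "vulnerable to adversarial examples" means in practice.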