no code implementations • 16 Mar 2024 • Jiyuan Fu, Zhaoyu Chen, Kaixun Jiang, Haijing Guo, Jiafeng Wang, Shuyong Gao, Wenqiang Zhang
Existing work rarely studies the transferability of attacks on VLP models, leaving a substantial performance gap between transfer-based and white-box attacks.
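Transferability here means crafting adversarial examples on a white-box surrogate model and testing them on an unseen target. A minimal sketch of that evaluation protocol, using a generic FGSM attack and placeholder classifiers rather than the paper's multimodal setup:

```python
# Transfer-attack evaluation sketch: attack a surrogate, score on a target.
# The attack and both models are generic placeholders (assumptions), not the
# paper's VLP pipeline.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM on the white-box surrogate."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

@torch.no_grad()
def transfer_success_rate(target_model, x_adv, y):
    # Fraction of adversarial examples that fool the *black-box* target.
    return (target_model(x_adv).argmax(dim=1) != y).float().mean().item()
```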
no code implementations • 14 Mar 2024 • Lingyi Hong, Shilin Yan, Renrui Zhang, Wanyun Li, Xinyu Zhou, Pinxue Guo, Kaixun Jiang, Yiting Chen, Jinglun Li, Zhaoyu Chen, Wenqiang Zhang
To evaluate the effectiveness of our general framework OneTracker, which consists of Foundation Tracker and Prompt Tracker, we conduct extensive experiments on 6 popular tracking tasks across 11 benchmarks; OneTracker outperforms other models and achieves state-of-the-art performance.
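A hedged sketch of the Foundation Tracker / Prompt Tracker split as the sentence describes it: a frozen foundation model adapted to a downstream task by a small set of trainable prompt tokens. The class name, the token-sequence interface, and all hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# Prompt-tuning wrapper sketch: freeze the foundation tracker, train only a
# few prompt parameters per downstream tracking task (assumed interface).
import torch
import torch.nn as nn

class PromptTracker(nn.Module):
    def __init__(self, foundation_tracker: nn.Module, prompt_dim: int, prompt_len: int = 8):
        super().__init__()
        self.foundation = foundation_tracker
        for p in self.foundation.parameters():
            p.requires_grad = False          # freeze the foundation tracker
        # Small trainable prompt tokens adapt the frozen model to a new task.
        self.prompts = nn.Parameter(torch.zeros(prompt_len, prompt_dim))
        nn.init.normal_(self.prompts, std=0.02)

    def forward(self, frame_tokens):
        # Prepend task prompts to the frame tokens: (B, L, D) -> (B, P+L, D).
        b = frame_tokens.size(0)
        prompts = self.prompts.unsqueeze(0).expand(b, -1, -1)
        return self.foundation(torch.cat([prompts, frame_tokens], dim=1))
```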
no code implementations • 2 Feb 2024 • Zhaoyu Chen, Zhengyang Shan, Jingwen Chang, Kaixun Jiang, Dingkang Yang, Yiting Cheng, Wenqiang Zhang
We conduct an adversarial robustness evaluation of 5 models on Cityscapes and ADE20K under 8 attacks.
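A minimal sketch of this kind of evaluation loop, assuming a generic PyTorch segmentation model; the attack shown is plain PGD on the per-pixel cross-entropy, and pixel accuracy stands in for the paper's metrics, neither of which is claimed to match the 8 attacks used above.

```python
# Robustness-evaluation sketch for semantic segmentation (assumed setup).
import torch
import torch.nn.functional as F

def pgd_attack_segmentation(model, images, labels, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD on per-pixel cross-entropy; returns adversarial images."""
    adv = images.clone().detach()
    adv += torch.empty_like(adv).uniform_(-eps, eps)   # random start
    adv = adv.clamp(0, 1)
    for _ in range(steps):
        adv.requires_grad_(True)
        logits = model(adv)                            # (N, C, H, W) scores
        loss = F.cross_entropy(logits, labels)         # labels: (N, H, W) ids
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()       # ascend the loss
        adv = images + (adv - images).clamp(-eps, eps) # project to eps-ball
        adv = adv.clamp(0, 1)
    return adv.detach()

@torch.no_grad()
def pixel_accuracy(model, images, labels, ignore_index=255):
    """Per-pixel accuracy; compare clean vs. adversarial inputs."""
    pred = model(images).argmax(dim=1)
    valid = labels != ignore_index
    return (pred[valid] == labels[valid]).float().mean().item()
```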
no code implementations • 18 Oct 2023 • Zhaoyu Chen, Bo Li, Kaixun Jiang, Shuang Wu, Shouhong Ding, Wenqiang Zhang
Furthermore, the fake faces generated by our method can pass both face forgery detection and face recognition, exposing security problems in face forgery detectors.
no code implementations • ICCV 2023 • Kaixun Jiang, Zhaoyu Chen, Hao Huang, Jiafeng Wang, Dingkang Yang, Bo Li, Yan Wang, Wenqiang Zhang
First, STDE introduces target videos as patch textures and adds patches only to keyframes that are adaptively selected by temporal difference.
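A hedged sketch of the keyframe-selection step: score each frame by its temporal difference from the previous frame and patch only the top-k frames. The scoring rule, the top-k selection, and the function names are illustrative assumptions, not STDE's exact procedure.

```python
# Temporal-difference keyframe selection sketch (assumed scoring rule).
import torch

def select_keyframes_by_temporal_difference(video, k):
    """video: (T, C, H, W) tensor in [0, 1]; returns indices of k keyframes."""
    diffs = (video[1:] - video[:-1]).abs().flatten(1).mean(dim=1)  # (T-1,)
    scores = torch.cat([diffs.new_zeros(1), diffs])  # frame 0 has no predecessor
    return torch.topk(scores, k).indices.sort().values

def paste_patch(video, keyframes, patch, top_left):
    """Overwrite a patch region on the selected keyframes only."""
    y, x = top_left
    ph, pw = patch.shape[-2:]
    patched = video.clone()
    patched[keyframes, :, y:y + ph, x:x + pw] = patch
    return patched
```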
no code implementations • 21 Mar 2023 • Yuzheng Wang, Zhaoyu Chen, Dingkang Yang, Pinxue Guo, Kaixun Jiang, Wenqiang Zhang, Lizhe Qi
Adversarial Robustness Distillation (ARD) is a promising approach to addressing the limited adversarial robustness of small-capacity models while avoiding the expensive computational cost of Adversarial Training (AT).
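A minimal sketch of a common ARD-style objective, assuming the student is trained on adversarial examples to match a robust teacher's soft predictions on the clean inputs; the temperature and loss weights are illustrative defaults, not this paper's settings.

```python
# ARD-style loss sketch: soft distillation from a robust teacher plus a
# hard-label term. The exact formulation here is an assumption.
import torch
import torch.nn.functional as F

def ard_loss(student, teacher, x_clean, x_adv, labels, T=4.0, alpha=0.9):
    """Blend KL(student(x_adv) || teacher(x_clean)) with hard-label CE."""
    s_logits = student(x_adv)
    with torch.no_grad():
        t_logits = teacher(x_clean)  # robust teacher provides soft targets
    kd = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                      # rescale the softened-gradient magnitude
    ce = F.cross_entropy(s_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```

The `T * T` factor is the standard distillation convention that keeps the gradient magnitude of the softened KL term comparable to the hard-label term.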
2 code implementations • 21 Nov 2022 • Jiafeng Wang, Zhaoyu Chen, Kaixun Jiang, Dingkang Yang, Lingyi Hong, Pinxue Guo, Haijing Guo, Wenqiang Zhang
To tackle these issues, we propose Global Momentum Initialization (GI) to suppress gradient elimination and help search for the global optimum.
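A hedged sketch of the initialization idea on top of MI-FGSM: a few pre-convergence iterations accumulate a global momentum that then seeds the actual attack instead of starting from zero. The step sizes, iteration counts, and normalization are illustrative choices, not the paper's exact schedule.

```python
# Global-momentum-initialized MI-FGSM sketch (assumed hyperparameters).
import torch
import torch.nn.functional as F

def normalized_grad(model, x, y):
    """Per-sample L1-normalized input gradient, as in MI-FGSM."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    g = torch.autograd.grad(loss, x)[0]
    return g / g.abs().mean(dim=(1, 2, 3), keepdim=True)

def gi_mi_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0, pre_steps=5):
    alpha = eps / steps
    # Phase 1: accumulate a global momentum; the perturbation is discarded.
    momentum = torch.zeros_like(x)
    x_pre = x.clone()
    for _ in range(pre_steps):
        momentum = mu * momentum + normalized_grad(model, x_pre, y)
        x_pre = (x_pre + alpha * momentum.sign()).clamp(0, 1)
    # Phase 2: standard MI-FGSM from the clean input, seeded with the
    # accumulated global momentum instead of zeros.
    adv = x.clone()
    for _ in range(steps):
        momentum = mu * momentum + normalized_grad(model, adv, y)
        adv = adv + alpha * momentum.sign()
        adv = x + (adv - x).clamp(-eps, eps)   # project to eps-ball
        adv = adv.clamp(0, 1)
    return adv.detach()
```

Seeding the momentum this way is meant to suppress the early-iteration gradient elimination the abstract mentions, since the first real attack steps already move along an averaged direction.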