Search Results for author: Kaixun Jiang

Found 7 papers, 1 paper with code

OneTracker: Unifying Visual Object Tracking with Foundation Models and Efficient Tuning

no code implementations14 Mar 2024 Lingyi Hong, Shilin Yan, Renrui Zhang, Wanyun Li, Xinyu Zhou, Pinxue Guo, Kaixun Jiang, Yiting Chen, Jinglun Li, Zhaoyu Chen, Wenqiang Zhang

To evaluate the effectiveness of our general framework OneTracker, which consists of Foundation Tracker and Prompt Tracker, we conduct extensive experiments on 6 popular tracking tasks across 11 benchmarks; OneTracker outperforms other models and achieves state-of-the-art performance.

Object Tracking, Visual Object Tracking

Exploring Decision-based Black-box Attacks on Face Forgery Detection

no code implementations18 Oct 2023 Zhaoyu Chen, Bo Li, Kaixun Jiang, Shuang Wu, Shouhong Ding, Wenqiang Zhang

Further, the fake faces generated by our method can bypass both face forgery detection and face recognition, which exposes security problems in face forgery detectors.

Face Recognition

Efficient Decision-based Black-box Patch Attacks on Video Recognition

no code implementations ICCV 2023 Kaixun Jiang, Zhaoyu Chen, Hao Huang, Jiafeng Wang, Dingkang Yang, Bo Li, Yan Wang, Wenqiang Zhang

First, STDE introduces target videos as patch textures and only adds patches on keyframes that are adaptively selected by temporal difference.
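The temporal-difference keyframe selection described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's STDE implementation: the function name, the mean-absolute-difference score, and the fixed top-k selection are all assumptions for demonstration.

```python
import numpy as np

def select_keyframes(video: np.ndarray, k: int) -> list[int]:
    """Pick the k frames whose content changes most from the previous frame.

    video: array of shape (T, H, W, C). Returns sorted frame indices.
    """
    # Mean absolute difference between each frame and its predecessor.
    diffs = np.abs(np.diff(video.astype(np.float32), axis=0)).mean(axis=(1, 2, 3))
    # Frame 0 has no predecessor; give it a zero score.
    diffs = np.concatenate([[0.0], diffs])
    # Indices of the k largest temporal differences, in temporal order.
    return sorted(np.argsort(diffs)[-k:].tolist())

# Toy usage: 8 random "frames", keep the 3 most dynamic ones.
video = np.random.rand(8, 4, 4, 3)
keyframes = select_keyframes(video, k=3)
```

In the attack setting, patches would then be added only on the selected keyframes, shrinking the search space compared to perturbing every frame.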

Video Recognition

Out of Thin Air: Exploring Data-Free Adversarial Robustness Distillation

no code implementations21 Mar 2023 Yuzheng Wang, Zhaoyu Chen, Dingkang Yang, Pinxue Guo, Kaixun Jiang, Wenqiang Zhang, Lizhe Qi

Adversarial Robustness Distillation (ARD) is a promising approach to the limited adversarial robustness of small-capacity models while avoiding the expensive computational cost of Adversarial Training (AT).

Adversarial Robustness, Knowledge Distillation +1

Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization

2 code implementations21 Nov 2022 Jiafeng Wang, Zhaoyu Chen, Kaixun Jiang, Dingkang Yang, Lingyi Hong, Pinxue Guo, Haijing Guo, Wenqiang Zhang

To tackle these issues, we propose Global Momentum Initialization (GI) to suppress gradient elimination and help search for the global optimum.
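The idea of initializing momentum globally before the attack proper can be sketched in an MI-FGSM-style loop. This is a hedged sketch, not the paper's exact GI algorithm: the function name `gi_momentum_attack`, the warm-up scheme (accumulating momentum over a few discarded pre-iterations), and all hyperparameter defaults are assumptions for illustration.

```python
import numpy as np

def gi_momentum_attack(grad_fn, x, eps, steps=10, pre_steps=5, mu=1.0):
    """MI-FGSM-style attack with a momentum warm-up ("global initialization").

    grad_fn(x) returns the loss gradient w.r.t. the input x.
    """
    alpha = eps / steps
    g = np.zeros_like(x)
    # Warm-up: accumulate momentum on throwaway iterates so the attack
    # proper starts with a non-zero, more global gradient direction,
    # suppressing early gradient elimination.
    x_pre = x.copy()
    for _ in range(pre_steps):
        grad = grad_fn(x_pre)
        g = mu * g + grad / (np.abs(grad).mean() + 1e-12)
        x_pre = x_pre + alpha * np.sign(g)
    # Attack iterations restart from the clean input, keeping the
    # warmed-up momentum g.
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.abs(grad).mean() + 1e-12)
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv
```

The clip against `[x - eps, x + eps]` keeps the perturbation within the usual L-infinity budget regardless of the warm-up.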
