Search Results for author: Xixu Hu

Found 6 papers, 4 papers with code

SpecFormer: Guarding Vision Transformer Robustness via Maximum Singular Value Penalization

no code implementations • 2 Jan 2024 • Xixu Hu, Runkai Zheng, Jindong Wang, Cheuk Hang Leung, Qi Wu, Xing Xie

In this study, we address this gap by introducing SpecFormer, specifically designed to enhance ViTs' resilience against adversarial attacks, with support from carefully derived theoretical guarantees.
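The penalization named in the title can be sketched as estimating each weight matrix's largest singular value with power iteration and adding the sum to the training loss. This is a minimal illustration, assuming a generic loss setup; the function names and the penalty coefficient are not from the paper.

```python
import numpy as np

def max_singular_value(W, n_iters=100, rng=None):
    """Estimate the largest singular value of W via power iteration."""
    rng = rng or np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    # u, v approximate the top singular vectors, so u^T W v ~ sigma_max
    return float(u @ W @ v)

def spectral_penalty(weight_matrices, coeff=1e-3):
    """Illustrative regularizer: penalize the sum of max singular values."""
    return coeff * sum(max_singular_value(W) for W in weight_matrices)
```

Adding such a term to the loss discourages any single layer from amplifying input perturbations, which is the intuition behind bounding a network's Lipschitz behavior.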

Computational Efficiency

Improving Generalization of Adversarial Training via Robust Critical Fine-Tuning

1 code implementation • ICCV 2023 • Kaijie Zhu, Jindong Wang, Xixu Hu, Xing Xie, Ge Yang

The core idea of RiFT is to exploit the redundant capacity for robustness by fine-tuning the adversarially trained model on its non-robust-critical module.
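The selection step can be sketched as ranking modules by a robustness-criticality score and fine-tuning only the least critical one while freezing the rest. The score and helper below are hypothetical, not RiFT's actual metric.

```python
def rift_finetune_plan(criticality):
    """Illustrative planner: criticality maps module name -> robustness drop
    when that module is perturbed (higher = more robust-critical).

    Returns a dict marking only the least critical module as trainable,
    so fine-tuning exploits redundant capacity without eroding robustness.
    """
    target = min(criticality, key=criticality.get)
    return {name: (name == target) for name in criticality}
```

In a real training loop, the returned flags would set `requires_grad` per module before standard fine-tuning.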

Adversarial Robustness

Deep into The Domain Shift: Transfer Learning through Dependence Regularization

1 code implementation • 31 May 2023 • Shumin Ma, Zhiri Yuan, Qi Wu, Yiyan Huang, Xixu Hu, Cheuk Hang Leung, Dongdong Wang, Zhixiang Huang

This paper proposes a new domain adaptation approach in which one can measure the differences in the internal dependence structure separately from those in the marginals.
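Separating dependence structure from marginals is commonly done with a copula-style rank transform; the sketch below illustrates that idea (it is not the paper's specific regularizer). Each column is mapped to pseudo-observations in (0, 1), which are invariant to any strictly increasing change of the marginals.

```python
import numpy as np

def to_empirical_copula(X):
    """Map each column of X to pseudo-observations in (0, 1) via ranks.

    The result captures only the dependence structure: a strictly
    increasing transform of any marginal leaves the output unchanged.
    """
    n = X.shape[0]
    ranks = X.argsort(axis=0).argsort(axis=0) + 1  # ranks 1..n per column
    return ranks / (n + 1)
```

Differences between two domains' copula transforms then measure dependence shift separately from marginal shift.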

Domain Adaptation Transfer Learning

FedCLIP: Fast Generalization and Personalization for CLIP in Federated Learning

1 code implementation • 27 Feb 2023 • Wang Lu, Xixu Hu, Jindong Wang, Xing Xie

Concretely, we design an attention-based adapter for the large model, CLIP, and the remaining operations depend solely on the adapters.
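A minimal sketch of an attention-style adapter over frozen backbone features, where only the adapter weights would be trained and exchanged in federated rounds. The shapes, initialization, and gating form are assumptions for illustration, not FedCLIP's exact design.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class AttentionAdapter:
    """Gates frozen features with learned attention weights, plus a residual."""
    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((dim, dim)) / np.sqrt(dim)  # trainable

    def __call__(self, feats):
        gate = softmax(feats @ self.W)  # per-dimension attention weights
        return feats + feats * gate     # residual keeps the frozen features intact

    def parameters(self):
        # Only this small matrix is updated and communicated, not the backbone.
        return [self.W]
```

Keeping the backbone frozen makes the per-round payload a single small matrix, which is what enables the fast generalization and personalization claimed in the title.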

Federated Learning Privacy Preserving

On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective

1 code implementation • 22 Feb 2023 • Jindong Wang, Xixu Hu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong Wang, Linyi Yang, Haojun Huang, Wei Ye, Xiubo Geng, Binxin Jiao, Yue Zhang, Xing Xie

In this paper, we conduct a thorough evaluation of the robustness of ChatGPT from the adversarial and out-of-distribution (OOD) perspective.

Adversarial Robustness Chatbot +1
