Search Results for author: Yijiang Li

Found 13 papers, 5 papers with code

Can 3D Vision-Language Models Truly Understand Natural Language?

1 code implementation • 21 Mar 2024 • Weipeng Deng, Runyu Ding, Jihan Yang, Jiahui Liu, Yijiang Li, Xiaojuan Qi, Edith Ngai

To test whether 3D-VL models truly understand natural language, we first propose a language robustness task that systematically assesses 3D-VL models across various tasks, benchmarking their performance when presented with different language style variants.

Benchmarking
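
To make the protocol concrete, here is a minimal, hypothetical evaluation loop for such a language robustness task; `model`, `answer_query`, and `variants_fn` are placeholders for the benchmarked 3D-VL system's interface and the prompt-rewriting step, not names from the paper.

```python
def language_robustness_eval(model, scenes, queries, variants_fn, answer_query):
    """Compare accuracy on canonical prompts vs. style variants of the same prompt.

    `queries` is a list of (query, target) pairs aligned with `scenes`;
    `variants_fn(query)` yields rephrased versions of the query (e.g. casual,
    verbose, or noisy styles) that keep the same target answer.
    """
    base_correct, variant_correct, variant_total = 0, 0, 0
    for scene, (query, target) in zip(scenes, queries):
        # Accuracy on the original, template-style query.
        base_correct += int(answer_query(model, scene, query) == target)
        # Accuracy on the style variants of the same query.
        for variant in variants_fn(query):
            variant_correct += int(answer_query(model, scene, variant) == target)
            variant_total += 1
    return {
        "base_acc": base_correct / len(queries),
        "variant_acc": variant_correct / max(variant_total, 1),
    }
```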

Towards Adversarially Robust Dataset Distillation by Curvature Regularization

no code implementations • 15 Mar 2024 • Eric Xue, Yijiang Li, Haoyang Liu, Yifan Shen, Haohan Wang

Extensive empirical experiments suggest that our method not only outperforms standard adversarial training in both accuracy and robustness with less computational overhead, but is also capable of generating robust distilled datasets that can withstand various adversarial attacks.

Adversarial Robustness
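
For intuition, below is a minimal sketch of a generic curvature-regularization penalty in the spirit of CURE-style finite-difference estimates; the paper's exact regularizer for distilled data may differ.

```python
import torch
import torch.nn.functional as F

def curvature_penalty(model, x, y, h=1e-2):
    """Finite-difference curvature penalty along the input-gradient direction.

    Penalising ||grad L(x + h*z) - grad L(x)|| flattens the loss surface around
    the inputs, a standard proxy for adversarial robustness. Assumes 4-D image
    inputs (B, C, H, W) in the [0, 1] range.
    """
    x = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    g = torch.autograd.grad(loss, x, create_graph=True)[0]

    # Normalised perturbation direction (detached so it is treated as a constant).
    z = g.detach()
    z = z / (z.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)

    x_pert = x + h * z
    g_pert = torch.autograd.grad(F.cross_entropy(model(x_pert), y),
                                 x_pert, create_graph=True)[0]
    return ((g_pert - g).flatten(1).norm(dim=1) ** 2).mean()

# Example usage: total_loss = task_loss + lam * curvature_penalty(model, images, labels)
```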

Approximate Nullspace Augmented Finetuning for Robust Vision Transformers

no code implementations • 15 Mar 2024 • Haoyang Liu, Aditya Singh, Yijiang Li, Haohan Wang

In this work, we provide a finetuning approach to enhance the robustness of vision transformers inspired by the concept of nullspace from linear algebra.
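
A toy numpy illustration of the underlying linear-algebra idea (not the paper's algorithm): perturbations lying in a map's nullspace leave its output unchanged, which is the invariance the finetuning approach approximates for vision transformers.

```python
import numpy as np

# For a linear map W, any v in the nullspace satisfies W @ v = 0, so the
# output is invariant to adding v to the input. The paper works with an
# *approximate* nullspace of non-linear ViTs; only the exact linear case
# the idea is borrowed from is shown here.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))          # wide matrix -> non-trivial nullspace
_, _, Vt = np.linalg.svd(W)
null_basis = Vt[np.linalg.matrix_rank(W):]  # rows spanning the nullspace

x = rng.standard_normal(128)
v = null_basis.T @ rng.standard_normal(null_basis.shape[0])  # random nullspace vector

print(np.allclose(W @ x, W @ (x + v)))      # True: output unchanged by v
```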

Gradient-Free Adaptive Global Pruning for Pre-trained Language Models

1 code implementation • 28 Feb 2024 • Guangji Bai, Yijiang Li, Chen Ling, Kibaek Kim, Liang Zhao

The transformative impact of large language models (LLMs) like LLaMA and GPT on natural language processing is countered by their prohibitive computational demands.

Computational Efficiency • Problem Decomposition

Dataset Distillation via the Wasserstein Metric

no code implementations • 30 Nov 2023 • Haoyang Liu, Yijiang Li, Tiancheng Xing, Vibhu Dalal, Luwei Li, Jingrui He, Haohan Wang

Dataset Distillation (DD) emerges as a powerful strategy to encapsulate the expansive information of large datasets into significantly smaller, synthetic equivalents, thereby preserving model performance with reduced computational overhead.
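
As a rough illustration of distribution matching under a Wasserstein-type objective, here is a hypothetical sliced-Wasserstein loss between real and synthetic feature sets; the paper's actual Wasserstein formulation may be different.

```python
import torch

def sliced_wasserstein(real_feats, syn_feats, n_proj=64):
    """Average 1-D Wasserstein distance along random projection directions.

    Assumes both feature sets have the same number of rows (subsample the real
    features otherwise). Minimising this w.r.t. the synthetic images pushes the
    synthetic feature distribution towards the real one.
    """
    d = real_feats.shape[1]
    proj = torch.randn(d, n_proj, device=real_feats.device)
    proj = proj / proj.norm(dim=0, keepdim=True)        # unit-norm directions
    real_1d = torch.sort(real_feats @ proj, dim=0).values
    syn_1d = torch.sort(syn_feats @ proj, dim=0).values
    return (real_1d - syn_1d).abs().mean()
```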

Choosing Wisely and Learning Deeply: Selective Cross-Modality Distillation via CLIP for Domain Generalization

no code implementations • 26 Nov 2023 • Jixuan Leng, Yijiang Li, Haohan Wang

SCMD leverages the capabilities of large vision-language models, specifically CLIP, to train a more efficient model that acquires robust generalization across unseen domains.

Domain Generalization
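
A minimal sketch of what cross-modality distillation from a frozen image encoder could look like, assuming a `teacher_encode` callable such as CLIP's image encoder and a student that returns features plus logits; the paper's selective, sample-wise distillation criterion is not reproduced here.

```python
import torch
import torch.nn.functional as F

def distill_step(student, teacher_encode, proj, images, labels, alpha=0.5):
    """One training step distilling a frozen VLM image encoder into a student.

    `teacher_encode` stands in for something like CLIP's image encoder;
    `proj` is a small linear head mapping student features to the teacher's
    embedding dimension. `student(images)` is assumed to return
    (features, logits), which is an illustrative convention, not the paper's.
    """
    with torch.no_grad():
        t_feat = teacher_encode(images).float()        # frozen teacher features
    s_feat, logits = student(images)
    kd = 1 - F.cosine_similarity(proj(s_feat), t_feat, dim=-1).mean()
    ce = F.cross_entropy(logits, labels)
    return ce + alpha * kd
```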

Understanding Adversarial Transferability in Federated Learning

no code implementations • 1 Oct 2023 • Yijiang Li, Ying Gao, Haohan Wang

We investigate robustness and security issues in a novel and practical setting: a group of malicious clients influences the model during training by disguising their identities and acting as benign clients, only revealing their adversarial position after training to conduct transferable adversarial attacks with their own data, which is usually a subset of the data the FL system was trained on.

Federated Learning
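
For context, a plain FGSM transfer-attack sketch in this setting: adversarial examples are crafted on a malicious client's local model and evaluated against the aggregated global model. This is illustrative only; the paper studies the threat setting itself rather than prescribing this particular attack.

```python
import torch
import torch.nn.functional as F

def fgsm_transfer_attack(local_model, global_model, x, y, eps=8 / 255):
    """Craft FGSM examples on a local (surrogate) model and measure transfer.

    Assumes image inputs in [0, 1]; returns the adversarial batch and the
    global model's accuracy on it (lower accuracy = better transfer).
    """
    x_adv = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(local_model(x_adv), y)
    loss.backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
    with torch.no_grad():
        transfer_acc = (global_model(x_adv).argmax(1) == y).float().mean()
    return x_adv, transfer_acc.item()
```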

Diverse Cotraining Makes Strong Semi-Supervised Segmentor

1 code implementation • ICCV 2023 • Yijiang Li, Xinjiang Wang, Lihe Yang, Litong Feng, Wayne Zhang, Ying Gao

Deep co-training has been introduced to semi-supervised segmentation and achieves impressive results, yet few studies have explored the working mechanism behind it.
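
A generic cross pseudo-supervision loss of the kind used in deep co-training, shown as a sketch; the paper's contribution concerns keeping the two branches diverse, which is not captured here.

```python
import torch
import torch.nn.functional as F

def cotraining_loss(model_a, model_b, x_lab, y_lab, x_unlab):
    """Generic co-training / cross pseudo-supervision loss for segmentation.

    Each model is supervised on labelled data and additionally trained to match
    the other model's hard pseudo-labels on unlabelled data. Logits are
    (B, C, H, W); labels and pseudo-labels are (B, H, W).
    """
    sup = F.cross_entropy(model_a(x_lab), y_lab) + F.cross_entropy(model_b(x_lab), y_lab)
    logits_a, logits_b = model_a(x_unlab), model_b(x_unlab)
    pseudo_a, pseudo_b = logits_a.argmax(1).detach(), logits_b.argmax(1).detach()
    cross = F.cross_entropy(logits_a, pseudo_b) + F.cross_entropy(logits_b, pseudo_a)
    return sup + 0.5 * cross
```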

Multi-metrics adaptively identifies backdoors in Federated learning

1 code implementation • ICCV 2023 • Siquan Huang, Yijiang Li, Chong Chen, Leyu Shi, Ying Gao

To evaluate the effectiveness of our approach, we conduct comprehensive experiments on different datasets under various attack settings, where our method achieves the best defensive performance.

Federated Learning • Privacy Preserving
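
A hypothetical sketch of multi-metric anomaly scoring over client updates (cosine, Euclidean, and Manhattan distances to the coordinate-wise median, z-scored and averaged); the paper's specific metrics, adaptive weighting, and filtering rule may differ.

```python
import torch
import torch.nn.functional as F

def score_client_updates(updates):
    """Score each client update with several distance metrics to the median update.

    `updates` is a list of flattened-or-flattenable update tensors, one per client.
    Higher scores indicate more anomalous (potentially backdoored) clients.
    """
    U = torch.stack([u.flatten() for u in updates])          # (num_clients, dim)
    median = U.median(dim=0).values
    cos = 1 - F.cosine_similarity(U, median.expand_as(U), dim=1)
    l2 = (U - median).norm(p=2, dim=1)
    l1 = (U - median).norm(p=1, dim=1)
    metrics = torch.stack([cos, l2, l1], dim=1)
    metrics = (metrics - metrics.mean(0)) / (metrics.std(0) + 1e-12)  # z-score per metric
    return metrics.mean(dim=1)
```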

More than Encoder: Introducing Transformer Decoder to Upsample

no code implementations • 20 Jun 2021 • Yijiang Li, Wentian Cai, Ying Gao, Chengming Li, Xiping Hu

Local, detailed features from shallower layers, such as boundaries and tissue texture, are particularly important in medical segmentation compared with natural image segmentation.

Decoder • Image Segmentation • +4
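
A toy cross-attention upsampling block, assuming high-resolution skip features attend to coarse decoder features; this is a simplified stand-in for the paper's transformer-decoder upsampling, not its exact architecture.

```python
import torch
import torch.nn as nn

class AttentionUpsample(nn.Module):
    """Cross-attention upsampler: high-resolution encoder (skip) features act as
    queries over low-resolution decoder features, instead of bilinear interpolation."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, hi_res, lo_res):
        # hi_res: (B, H*W, C) skip tokens; lo_res: (B, h*w, C) coarse tokens.
        out, _ = self.attn(query=hi_res, key=lo_res, value=lo_res)
        return self.norm(hi_res + out)
```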
