Search Results for author: Yuzhong Chen

Found 12 papers, 1 paper with code

TransFlower: An Explainable Transformer-Based Model with Flow-to-Flow Attention for Commuting Flow Prediction

1 code implementation • 23 Feb 2024 • Yan Luo, Zhuoyue Wan, Yuzhong Chen, Gengchen Mai, Fu-Lai Chung, Kent Larson

Understanding the link between urban planning and commuting flows is crucial for guiding urban development and policymaking.

Rethinking Personalized Federated Learning with Clustering-based Dynamic Graph Propagation

no code implementations • 29 Jan 2024 • Jiaqi Wang, Yuzhong Chen, Yuhang Wu, Mahashweta Das, Hao Yang, Fenglong Ma

Subsequently, we design a precise personalized model distribution strategy that allows each client to obtain the most suitable model from the server.
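The snippet does not spell out the propagation scheme; as a rough, hypothetical sketch of the general idea only (clustering client updates on the server and returning each client its cluster's aggregate; distribute_models is an invented name, not the paper's API):

import numpy as np
from sklearn.cluster import KMeans

def distribute_models(client_weights, n_clusters=3):
    # Hypothetical helper, not the paper's algorithm: group clients by
    # the similarity of their flattened model weights, then return each
    # client the mean weights of its cluster as a personalized model.
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(client_weights)
    return km.cluster_centers_[km.labels_]

# Toy usage: 10 clients, each uploading a 128-dim flattened weight vector.
personalized = distribute_models(np.random.randn(10, 128))  # shape (10, 128)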

Clustering • Personalized Federated Learning

Invariant Graph Transformer

no code implementations • 13 Dec 2023 • Zhe Xu, Menghai Pan, Yuzhong Chen, Huiyuan Chen, Yuchen Yan, Mahashweta Das, Hanghang Tong

Built on the self-attention module, our proposed Invariant Graph Transformer (IGT) achieves fine-grained intervention, specifically at the node and virtual-node levels.
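As an illustrative sketch only, not the paper's actual IGT: node-level intervention can be pictured as gating each node's features toward a shared, learned "environment" vector before a standard self-attention pass:

import torch
import torch.nn as nn

class NodeIntervention(nn.Module):
    # Illustrative sketch, not the paper's IGT: each node's features are
    # gated toward a shared, learned "environment" vector before
    # self-attention, mimicking a node-level intervention.
    def __init__(self, dim, heads=4):
        super().__init__()
        self.gate = nn.Linear(dim, 1)              # per-node intervention gate
        self.env = nn.Parameter(torch.zeros(dim))  # shared intervention vector
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                 # x: (batch, nodes, dim)
        g = torch.sigmoid(self.gate(x))   # intervention strength per node
        x = g * self.env + (1 - g) * x    # node-level feature replacement
        out, _ = self.attn(x, x, x)       # self-attention over intervened nodes
        return out

y = NodeIntervention(64)(torch.randn(2, 16, 64))  # (2, 16, 64)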

Tackling Diverse Minorities in Imbalanced Classification

no code implementations • 28 Aug 2023 • Kwei-Herng Lai, Daochen Zha, Huiyuan Chen, Mangesh Bendre, Yuzhong Chen, Mahashweta Das, Hao Yang, Xia Hu

Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.

Anomaly Detection • Classification +2

Instruction-ViT: Multi-Modal Prompts for Instruction Learning in ViT

no code implementations • 29 Apr 2023 • Zhenxiang Xiao, Yuzhong Chen, Lu Zhang, Junjie Yao, Zihao Wu, Xiaowei Yu, Yi Pan, Lin Zhao, Chong Ma, Xinyu Liu, Wei Liu, Xiang Li, Yixuan Yuan, Dinggang Shen, Dajiang Zhu, Tianming Liu, Xi Jiang

Prompts have proven to play a crucial role in large language models, and in recent years vision models have also adopted prompts to improve scalability across multiple downstream tasks.
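A minimal sketch of the basic mechanism, with the multi-modal instruction fusion of Instruction-ViT omitted: learnable prompt vectors are prepended to a ViT's patch-token sequence before the encoder:

import torch
import torch.nn as nn

class VisualPrompts(nn.Module):
    # Sketch of plain visual prompting; Instruction-ViT's multi-modal
    # fusion is not modeled here. Learnable prompt vectors are simply
    # prepended to the patch-token sequence fed to the encoder.
    def __init__(self, n_prompts, dim):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(1, n_prompts, dim) * 0.02)

    def forward(self, patch_tokens):      # (batch, patches, dim)
        b = patch_tokens.size(0)
        return torch.cat([self.prompts.expand(b, -1, -1), patch_tokens], dim=1)

tokens = torch.randn(8, 196, 768)              # e.g. 14x14 patches
with_prompts = VisualPrompts(10, 768)(tokens)  # (8, 206, 768)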

Image Classification

When Brain-inspired AI Meets AGI

no code implementations • 28 Mar 2023 • Lin Zhao, Lu Zhang, Zihao Wu, Yuzhong Chen, Haixing Dai, Xiaowei Yu, Zhengliang Liu, Tuo Zhang, Xintao Hu, Xi Jiang, Xiang Li, Dajiang Zhu, Dinggang Shen, Tianming Liu

Artificial General Intelligence (AGI) has been a long-standing goal of humanity, with the aim of creating machines capable of performing any intellectual task that humans can do.

In-Context Learning

Rectify ViT Shortcut Learning by Visual Saliency

no code implementations • 17 Jun 2022 • Chong Ma, Lin Zhao, Yuzhong Chen, David Weizhong Liu, Xi Jiang, Tuo Zhang, Xintao Hu, Dinggang Shen, Dajiang Zhu, Tianming Liu

In this work, we propose a novel and effective saliency-guided vision transformer (SGT) model to rectify shortcut learning in ViT in the absence of eye-gaze data.
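One way to picture saliency guidance, offered as a hedged sketch rather than the paper's SGT: rank patch tokens by a precomputed saliency score and keep only the most informative ones:

import torch

def select_salient_patches(tokens, saliency, k):
    # Hedged sketch; details differ from the paper's SGT. Rank patch
    # tokens by a precomputed saliency score and keep the top-k, so the
    # transformer attends only to informative image regions.
    # tokens: (batch, patches, dim); saliency: (batch, patches)
    idx = saliency.topk(k, dim=1).indices                    # (batch, k)
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
    return tokens.gather(1, idx)                             # (batch, k, dim)

tokens = torch.randn(4, 196, 768)
saliency = torch.rand(4, 196)                # e.g. from a saliency predictor
kept = select_salient_patches(tokens, saliency, k=49)  # keep 25% of patches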

Eye-gaze-guided Vision Transformer for Rectifying Shortcut Learning

no code implementations • 25 May 2022 • Chong Ma, Lin Zhao, Yuzhong Chen, Lu Zhang, Zhenxiang Xiao, Haixing Dai, David Liu, Zihao Wu, Zhengliang Liu, Sheng Wang, Jiaxing Gao, Changhe Li, Xi Jiang, Tuo Zhang, Qian Wang, Dinggang Shen, Dajiang Zhu, Tianming Liu

To address this problem, we propose to infuse human experts' intelligence and domain knowledge into the training of deep neural networks.

Mask-guided Vision Transformer (MG-ViT) for Few-Shot Learning

no code implementations • 20 May 2022 • Yuzhong Chen, Zhenxiang Xiao, Lin Zhao, Lu Zhang, Haixing Dai, David Weizhong Liu, Zihao Wu, Changhe Li, Tuo Zhang, Changying Li, Dajiang Zhu, Tianming Liu, Xi Jiang

However, for data-intensive models such as the vision transformer (ViT), current fine-tuning based FSL approaches generalize knowledge inefficiently and thus degrade downstream task performance.

Active Learning • Few-Shot Learning

A Unified and Biologically-Plausible Relational Graph Representation of Vision Transformers

no code implementations • 20 May 2022 • Yuzhong Chen, Yu Du, Zhenxiang Xiao, Lin Zhao, Lu Zhang, David Weizhong Liu, Dajiang Zhu, Tuo Zhang, Xintao Hu, Tianming Liu, Xi Jiang

The key characteristic of these ViT models is that they adopt different strategies for aggregating spatial patch information within the artificial neural networks (ANNs).
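One plausible, simplified reading of such a relational graph, offered only as an assumption-laden sketch and not the paper's construction: treat patch tokens as nodes and head-averaged attention weights as edges:

import torch
import networkx as nx

def attention_to_graph(attn, threshold=0.05):
    # Assumption-laden sketch of one such representation: patch tokens
    # become nodes and head-averaged attention weights become directed,
    # weighted edges, with weak edges pruned by a threshold.
    # attn: (heads, tokens, tokens), rows softmax-normalized.
    w = attn.mean(dim=0)
    g = nx.DiGraph()
    g.add_nodes_from(range(w.size(0)))
    for i in range(w.size(0)):
        for j in range(w.size(1)):
            if w[i, j] >= threshold:
                g.add_edge(i, j, weight=float(w[i, j]))
    return g

attn = torch.softmax(torch.randn(4, 16, 16), dim=-1)  # toy attention maps
graph = attention_to_graph(attn)                      # nx.DiGraph over 16 nodes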
