no code implementations • 21 Mar 2024 • Hong Huang, Weiming Zhuang, Chen Chen, Lingjuan Lyu
To address these challenges, we propose FedMef, a novel and memory-efficient federated dynamic pruning framework.
1 code implementation • ICCV 2023 • Weiming Zhuang, Yonggang Wen, Lingjuan Lyu, Shuai Zhang
Then, we present our new approach, MAS (Merge and Split), to optimize the performance of training multiple simultaneous FL tasks.
no code implementations • 11 Jul 2023 • Sikai Bai, Shuaicheng Li, Weiming Zhuang, Jie Zhang, Song Guo, Kunlin Yang, Jun Hou, Shuai Zhang, Junyu Gao, Shuai Yi
Theoretically, we show the convergence guarantee of the dual regulators.
no code implementations • 27 Jun 2023 • Weiming Zhuang, Chen Chen, Lingjuan Lyu
The intersection of the Foundation Model (FM) and Federated Learning (FL) provides mutual benefits, presents a unique opportunity to unlock new possibilities in AI research, and addresses critical challenges in AI and real-world applications.
no code implementations • 9 Jun 2023 • Weiming Zhuang, Lingjuan Lyu
Federated learning (FL) enhances data privacy with collaborative in-situ training on decentralized clients.
1 code implementation • ICCV 2023 • Jie Zhang, Chen Chen, Weiming Zhuang, Lingjuan Lyu
This paper focuses on an under-explored yet important problem: Federated Class-Continual Learning (FCCL), where new classes are dynamically added in federated learning.
no code implementations • 9 Jul 2022 • Weiming Zhuang, Yonggang Wen, Shuai Zhang
In this work, we propose a smart multi-tenant FL system, MuFL, to effectively coordinate and execute simultaneous training activities.
no code implementations • 3 Jul 2022 • Weiming Zhuang, Chongjie Ye, Ying Xu, Pengzhi Mao, Shuai Zhang
In this demo, we present Chat-to-Design, a new multimodal interaction system for personalized fashion design.
2 code implementations • 24 May 2022 • Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang
Based on these insights, we propose three optimization approaches: (1) We adopt knowledge distillation to facilitate the convergence of FedReID by better transferring knowledge from clients to the server; (2) We introduce client clustering to improve the performance of large datasets by aggregating clients with similar data distributions; (3) We propose cosine distance weight to elevate performance by dynamically updating the weights for aggregation depending on how well models are trained in clients.
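The cosine distance weight idea in (3) can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formula: it assumes each client model is flattened into a vector, and that a client whose parameters moved further from the previous global model (larger cosine distance) is treated as better trained and receives a larger aggregation weight.

```python
import math

def cosine_distance(global_vec, client_vec):
    # cosine distance = 1 - cosine similarity between flattened models
    dot = sum(g * c for g, c in zip(global_vec, client_vec))
    norm_g = math.sqrt(sum(g * g for g in global_vec))
    norm_c = math.sqrt(sum(c * c for c in client_vec))
    return 1.0 - dot / (norm_g * norm_c + 1e-12)

def aggregate(global_vec, client_vecs):
    """Weighted average of flattened client models; the weights are the
    normalized cosine distances to the previous global model (assumed
    weighting scheme for illustration)."""
    raw = [cosine_distance(global_vec, c) for c in client_vecs]
    total = sum(raw) or 1.0
    weights = [w / total for w in raw]
    aggregated = [sum(w * c[i] for w, c in zip(weights, client_vecs))
                  for i in range(len(global_vec))]
    return aggregated, weights
```

In this sketch the weights are recomputed every round, so clients that train further in a given round dynamically contribute more to the new global model.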
no code implementations • 9 Apr 2022 • Weiming Zhuang, Xin Gan, Yonggang Wen, Xuesen Zhang, Shuai Zhang, Shuai Yi
To address this problem, we propose federated unsupervised domain adaptation for face recognition, FedFR.
1 code implementation • ICLR 2022 • Weiming Zhuang, Yonggang Wen, Shuai Zhang
Using the framework, our study uncovers unique insights into FedSSL: 1) the stop-gradient operation, previously reported to be essential, is not always necessary in FedSSL; 2) retaining local knowledge of clients in FedSSL is particularly beneficial for non-IID data.
1 code implementation • ICCV 2021 • Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang, Shuai Yi
In this framework, each party trains models from unlabeled data independently using contrastive learning with an online network and a target network.
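The online/target-network pairing described above is commonly realized by updating the target network as an exponential moving average (EMA) of the online network, as in BYOL-style contrastive learning. A minimal sketch of that update, with the momentum value assumed for illustration:

```python
def ema_update(target_params, online_params, momentum=0.99):
    """Move each target-network parameter toward the corresponding
    online-network parameter via an exponential moving average.
    The momentum of 0.99 is an assumed, typical value."""
    return [momentum * t + (1.0 - momentum) * o
            for t, o in zip(target_params, online_params)]
```

The target network is never updated by gradients directly; only the online network is trained, and the slow EMA copy provides stable targets for the contrastive objective.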
1 code implementation • 14 Aug 2021 • Weiming Zhuang, Yonggang Wen, Shuai Zhang
We present FedUReID, a federated unsupervised person ReID system to learn person ReID models without any labels while preserving privacy.
no code implementations • 17 May 2021 • Weiming Zhuang, Xin Gan, Yonggang Wen, Xuesen Zhang, Shuai Zhang, Shuai Yi
To this end, FedFR forms an end-to-end training pipeline: (1) pre-train in the source domain; (2) predict pseudo labels by clustering in the target domain; (3) conduct domain-constrained federated learning across two domains.
1 code implementation • 17 May 2021 • Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang
However, these platforms are complex to use and require a deep understanding of FL, which imposes high barriers to entry for beginners, limits the productivity of researchers, and compromises deployment efficiency.
2 code implementations • 26 Aug 2020 • Weiming Zhuang, Yonggang Wen, Xuesen Zhang, Xin Gan, Daiying Yin, Dongzhan Zhou, Shuai Zhang, Shuai Yi
Then we propose two optimization methods: (1) To address the unbalanced weight problem, we propose a new method to dynamically change the weights according to the scale of model changes in clients in each training round; (2) To facilitate convergence, we adopt knowledge distillation to refine the server model with knowledge generated from client models on a public dataset.
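The dynamic weighting in (1) can be sketched as below. This is an illustrative reading of "weights according to the scale of model changes": each client's aggregation weight is proportional to how much its parameters changed during local training in that round, replacing the unbalanced dataset-size weights; the exact measure of change scale in the paper may differ.

```python
import math

def change_scale(global_model, client_model):
    # L2 norm of the parameter change over the client's local training
    return math.sqrt(sum((c - g) ** 2
                         for g, c in zip(global_model, client_model)))

def dynamic_weights(global_model, client_models):
    """Normalize each client's change scale in this round into an
    aggregation weight (illustrative sketch, flattened models assumed)."""
    scales = [change_scale(global_model, c) for c in client_models]
    total = sum(scales) or 1.0
    return [s / total for s in scales]
```

Because the weights are recomputed from the current round's model changes, they adapt each round rather than staying fixed by client dataset size.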