no code implementations • 6 May 2024 • Xiaoxue Yu, Xingfu Yi, Rongpeng Li, Fei Wang, Chenghui Peng, Zhifeng Zhao, Honggang Zhang
Existing distributed learning frameworks such as Federated Learning and Split Learning often face significant challenges in dynamic network environments, including high synchronization demands, costly communication overhead, heavy computing resource consumption, and data heterogeneity across network nodes.
no code implementations • 12 Jul 2023 • Yuxuan Chen, Rongpeng Li, Zhifeng Zhao, Chenghui Peng, Jianjun Wu, Ekram Hossain, Honggang Zhang
Towards personalized generative services, a collaborative cloud-edge methodology is promising, as it facilitates the effective orchestration of heterogeneous distributed communication and computing resources.
no code implementations • 1 Jun 2023 • Xingfu Yi, Rongpeng Li, Chenghui Peng, Fei Wang, Jianjun Wu, Zhifeng Zhao
The rapid development of artificial intelligence (AI) across massive applications, including the Internet of Things (IoT) over cellular networks, raises concerns about technical challenges such as privacy, heterogeneity, and resource efficiency.
no code implementations • 5 May 2023 • Yuchen Shi, Zheqi Zhu, Pingyi Fan, Khaled B. Letaief, Chenghui Peng
Federated Learning (FL) is a promising distributed learning mechanism that still faces two major challenges, namely privacy breaches and system efficiency.
1 code implementation • 11 Mar 2023 • Zheqi Zhu, Yuchen Shi, Jiajun Luo, Fei Wang, Chenghui Peng, Pingyi Fan, Khaled B. Letaief
By adopting layer-wise pruning in local training and federated updating, we formulate an explicit FL pruning framework, FedLP (Federated Layer-wise Pruning), which is model-agnostic and universal for different types of deep learning models.
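The layer-wise pruning-and-aggregation idea described above can be sketched minimally as follows. This is a hedged illustration only, assuming a model represented as a plain list of layer weight arrays; the function names and the Bernoulli keep-probability scheme are hypothetical and not taken from the FedLP paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def layerwise_prune(layers, keep_prob):
    """Mark which layers a client retains for local training
    (a sketch of layer-wise pruning; FedLP's exact scheme may differ)."""
    return [bool(rng.random() < keep_prob) for _ in layers]

def federated_layerwise_update(global_layers, client_updates):
    """Federated updating: aggregate each layer only over the clients
    that kept that layer; untouched layers keep the global weights."""
    new_layers = []
    for l, g in enumerate(global_layers):
        contribs = [upd[l] for upd, mask in client_updates if mask[l]]
        new_layers.append(np.mean(contribs, axis=0) if contribs else g)
    return new_layers
```

Because the scheme only needs a list of per-layer arrays, it is agnostic to the underlying model architecture, which matches the model-agnostic claim in the abstract.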
1 code implementation • 5 Oct 2022 • Zheqi Zhu, Yuchen Shi, Pingyi Fan, Chenghui Peng, Khaled B. Letaief
Then, we formulate the problem of selecting optimal importance sampling (IS) weights and derive the theoretical solutions.
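To make the importance sampling (IS) weighting idea concrete, here is a generic sketch: per-sample importance scores are normalized into weights that reweight the empirical loss. This is a standard IS construction for illustration only; the paper's optimal weights are derived theoretically and are not reproduced here.

```python
import numpy as np

def normalized_is_weights(importance):
    """Normalize nonnegative per-sample importance scores into
    weights summing to 1 (a generic IS sketch, not the paper's solution)."""
    importance = np.asarray(importance, dtype=float)
    return importance / importance.sum()

def is_weighted_loss(losses, weights):
    """Importance-weighted empirical loss."""
    return float(np.dot(weights, losses))
```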
no code implementations • 2 Dec 2021 • Shuo Wan, Jiaxun Lu, Pingyi Fan, Yunfeng Shao, Chenghui Peng, Khaled B. Letaief
In this paper, we develop a vertical-horizontal federated learning (VHFL) process, where the global feature is shared with the agents in a procedure similar to that of vertical FL without any extra communication rounds.
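The VHFL procedure described above can be sketched as follows, under loud assumptions: each agent simply concatenates the shared global feature onto its local features before a local training step, and model weights are then averaged horizontally, FedAvg-style. The linear model, function names, and learning rate are all hypothetical illustration, not the paper's implementation.

```python
import numpy as np

def local_step(w, X_local, g_feature, y, lr=0.1):
    """One local gradient step on a linear model over the concatenation
    [local features || shared global feature] (a VHFL-style sketch)."""
    X = np.hstack([X_local, np.tile(g_feature, (X_local.shape[0], 1))])
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def horizontal_aggregate(weights):
    """Horizontal FL part: average agents' weight vectors, FedAvg-style."""
    return np.mean(weights, axis=0)
```

Note that the global feature enters each agent's computation without any extra communication round beyond distributing it once, which is the point of the abstract's claim.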
no code implementations • 20 Aug 2021 • Qingyang Zhou, Rongpeng Li, Zhifeng Zhao, Chenghui Peng, Honggang Zhang
With the development of deep learning (DL), natural language processing (NLP) makes it possible for us to analyze and understand large amounts of text.
no code implementations • 30 Apr 2021 • Shuo Wan, Jiaxun Lu, Pingyi Fan, Yunfeng Shao, Chenghui Peng, Khaled B. Letaief
Federated learning (FL) has recently emerged as an important and promising learning scheme in IoT, enabling devices to jointly learn a model without sharing their raw data sets.