no code implementations • 28 Mar 2024 • Xinyu Bian, Yuhao Liu, Yizhou Xu, Tianqi Hou, Wenjie Wang, Yuyi Mao, Jun Zhang
Simulation results demonstrate the effectiveness of our proposed decentralized precoding scheme, which achieves performance similar to the optimal centralized precoding scheme.
no code implementations • 15 Mar 2024 • Yuhao Liu, Xinyu Bian, Yizhou Xu, Tianqi Hou, Wenjie Wang, Yuyi Mao, Jun Zhang
In order to control the inter-cell interference for a multi-cell multi-user multiple-input multiple-output network, we consider the precoder design for coordinated multi-point with downlink coherent joint transmission.
no code implementations • 28 Feb 2024 • Xinyu Bian, Yuyi Mao, Jun Zhang
Grant-free random access (RA) has been recognized as a promising solution to support massive connectivity due to the removal of the uplink grant request procedures.
no code implementations • 15 Feb 2024 • Wenhao Zhuang, Yuyi Mao, Hengtao He, Lei Xie, Shenghui Song, Yao Ge, Zhi Ding
Orthogonal time frequency space (OTFS) modulation has emerged as a promising solution to support high-mobility wireless communications, for which cost-effective data detectors are critical.
no code implementations • 1 Dec 2023 • Yuyi Mao, Xianghao Yu, Kaibin Huang, Ying-Jun Angela Zhang, Jun Zhang
Guided by these principles, we then explore energy-efficient design methodologies for the three critical tasks in edge AI systems, namely training data acquisition, edge training, and edge inference.
no code implementations • 25 Oct 2023 • Linping Qu, Shenghui Song, Chi-Ying Tsui, Yuyi Mao
It is also shown that the uplink communication in FL can tolerate a higher bit error rate (BER) than downlink communication, and this difference is quantified by a proposed formula.
no code implementations • 30 Aug 2023 • Zijian Li, Zehong Lin, Jiawei Shao, Yuyi Mao, Jun Zhang
However, devices often have non-independent and identically distributed (non-IID) data, meaning their local data distributions can vary significantly.
no code implementations • 9 Aug 2023 • Zijian Li, Yuchang Sun, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang
For better privacy preservation, we propose a hard feature augmentation method to transfer real features towards the decision boundary, with which the synthetic data not only improve the model generalization but also erase the information of real features.
no code implementations • 20 Jul 2023 • Jiawei Shao, Zijian Li, Wenqiang Sun, Tailin Zhou, Yuchang Sun, Lumin Liu, Zehong Lin, Yuyi Mao, Jun Zhang
Without data centralization, FL allows clients to share local information in a privacy-preserving manner.
1 code implementation • 21 Jun 2023 • Yuchang Sun, Yuyi Mao, Jun Zhang
Federated learning (FL) is a promising framework for privacy-preserving collaborative learning, where model training tasks are distributed to clients and only the model updates need to be collected at a server.
no code implementations • 26 May 2023 • Yuchang Sun, Zehong Lin, Yuyi Mao, Shi Jin, Jun Zhang
In this paper, we propose a probabilistic device scheduling framework for over-the-air FL, named PO-FL, to mitigate the negative impact of channel noise, where each device is scheduled according to a certain probability and its model update is reweighted using this probability in aggregation.
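The core idea of PO-FL, as described above, is inverse-probability reweighting: scheduling device k with probability p_k and scaling its update by 1/p_k keeps the aggregate an unbiased estimate of the full all-device average. A minimal sketch of that mechanism (scalar updates, made-up probabilities; not the paper's full scheme, which also optimizes the probabilities and models channel noise):

```python
import random

def po_fl_aggregate(updates, probs, rng):
    """One scheduling round: device k participates with probability probs[k];
    a scheduled update is reweighted by 1/probs[k] so the aggregate is an
    unbiased estimate of the full (all-device) average."""
    total = 0.0
    for u, p in zip(updates, probs):
        if rng.random() < p:
            total += u / p  # inverse-probability reweighting
    return total / len(updates)

# Averaged over many rounds, the estimate matches the full average.
rng = random.Random(0)
updates = [1.0, 2.0, 3.0, 4.0]
probs = [0.5, 0.9, 0.3, 0.7]
est = sum(po_fl_aggregate(updates, probs, rng) for _ in range(20000)) / 20000
full = sum(updates) / len(updates)  # 2.5
```

Per round the estimate is noisy (few devices participate), but its expectation equals the full average, which is the property the probabilistic scheduling framework relies on.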
no code implementations • 21 May 2023 • Xinyu Bian, Yuyi Mao, Jun Zhang
Most existing studies on joint activity detection and channel estimation for grant-free massive random access (RA) systems assume perfect synchronization among all active users, which is hard to achieve in practice.
no code implementations • 12 Apr 2023 • Xinyu Bian, Yuyi Mao, Jun Zhang
Specifically, by jointly leveraging the user activity correlation between adjacent transmission blocks and the historical channel estimation results, we first develop an activity-correlation-aware receiver for grant-free massive RA systems with retransmission based on the correlated approximate message passing (AMP) algorithm.
no code implementations • 8 Nov 2022 • Yuchang Sun, Jiawei Shao, Yuyi Mao, Songze Li, Jun Zhang
During training, the server computes gradients on the global coded dataset to compensate for the missing model updates of the straggling devices.
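The compensation step above works because gradients are additive over data: the gradient on the pooled (coded) dataset equals the sum of the per-device gradients, so a server-side gradient can stand in for the contributions of straggling devices. A toy illustration for scalar least squares (the actual scheme encodes the global dataset rather than storing it raw; this only demonstrates the additivity that makes compensation possible):

```python
def grad_linear(w, data):
    """Squared-loss gradient d/dw of sum_i (w*x_i - y_i)^2 for scalar w."""
    return sum(2 * (w * x - y) * x for x, y in data)

# Per-device shards; suppose device 2 straggles this round.
shards = [[(1.0, 2.0)], [(2.0, 3.0)], [(3.0, 7.0)]]
w = 0.5

received = grad_linear(w, shards[0]) + grad_linear(w, shards[1])
# The server, holding a copy of the global (coded) data, computes the full
# gradient; the straggler's missing contribution is the difference.
full = grad_linear(w, shards[0] + shards[1] + shards[2])
missing = full - received
compensated = received + missing  # equals the full all-device gradient
```

Because the full gradient decomposes exactly into per-shard gradients, the compensated update is identical to what all devices reporting on time would have produced.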
no code implementations • 15 Jun 2022 • Rongkang Dong, Yuyi Mao, Jun Zhang
In this paper, we propose an early exit prediction mechanism to reduce the on-device computation overhead in a device-edge co-inference system supported by early-exit networks.
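For context, the early-exit networks this mechanism builds on attach intermediate classifiers to a backbone and stop at the first exit whose confidence clears a threshold, so easy samples skip the deeper layers. A minimal sketch of that standard inference loop (toy exits with hard-coded confidences; the paper's contribution, predicting the exit in advance to save on-device computation, is not shown):

```python
def early_exit_infer(x, exits, threshold=0.8):
    """Evaluate intermediate classifiers in order and stop at the first
    exit whose confidence clears the threshold. `exits` is a list of
    callables returning (label, confidence)."""
    for depth, exit_head in enumerate(exits):
        label, conf = exit_head(x)
        if conf >= threshold:
            return label, depth  # early exit: later layers are skipped
    return label, len(exits) - 1  # fall through to the final exit

# Toy exit heads whose confidence grows with depth.
exits = [
    lambda x: ("cat", 0.55),
    lambda x: ("cat", 0.85),
    lambda x: ("cat", 0.97),
]
label, depth = early_exit_infer(None, exits)
# exits at depth 1, so the third classifier is never evaluated
```

In a device-edge co-inference setting, exiting at a shallow on-device head also avoids transmitting intermediate features to the edge server.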
no code implementations • 11 Jun 2022 • Zijian Li, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang
A combination of the local private dataset and synthetic dataset with confident pseudo labels leads to nearly identical data distributions among clients, which improves the consistency among local models and benefits the global aggregation.
no code implementations • 25 Jan 2022 • Yuchang Sun, Jiawei Shao, Songze Li, Yuyi Mao, Jun Zhang
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework, where many clients collaboratively train a machine learning model by exchanging model updates with a parameter server instead of sharing their raw data.
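The exchange described above, local training followed by server-side averaging of model updates, is the FedAvg pattern. A minimal scalar sketch under simplifying assumptions (one-dimensional least-squares model, full participation; real FL systems add sampling, weighting by dataset size, and communication constraints):

```python
def fedavg_round(global_w, client_data, lr=0.1, local_steps=5):
    """One round: each client runs local SGD from the global model and
    sends back only its updated weight; the server averages the weights,
    never seeing raw data."""
    client_ws = []
    for data in client_data:
        w = global_w
        for _ in range(local_steps):
            for x, y in data:
                w -= lr * 2 * (w * x - y) * x  # local gradient step
        client_ws.append(w)
    return sum(client_ws) / len(client_ws)  # server-side averaging

# Two clients whose data share the underlying model y = 2x.
clients = [[(1.0, 2.0)], [(2.0, 4.0)]]
w = 0.0
for _ in range(50):
    w = fedavg_round(w, clients)
# w converges to 2.0
```

Only the scalar `w` crosses the network each round; the (x, y) samples stay on their respective clients, which is the privacy-preserving property the abstract refers to.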
no code implementations • 20 Dec 2021 • Yuchang Sun, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang
By exploiting the low-latency communication among edge servers for efficient model sharing, SD-FEEL can incorporate more training data, while enjoying much lower latency compared with conventional federated learning.
no code implementations • 9 Dec 2021 • Yuchang Sun, Jiawei Shao, Yuyi Mao, Jun Zhang
Federated edge learning (FEEL) has drawn much attention as a privacy-preserving distributed learning framework for mobile edge networks.
2 code implementations • 1 Sep 2021 • Jiawei Shao, Yuyi Mao, Jun Zhang
To enable low-latency cooperative inference, we propose a learning-based communication scheme that optimizes local feature extraction and distributed feature encoding in a task-oriented manner, i.e., to remove data redundancy and transmit only the information essential for the downstream inference task, rather than reconstructing the data samples at the edge server.
no code implementations • 30 Aug 2021 • Xinjie Zhang, Jiawei Shao, Yuyi Mao, Jun Zhang
Device-edge co-inference, which partitions a deep neural network between a resource-constrained mobile device and an edge server, has recently emerged as a promising paradigm to support intelligent mobile applications.
no code implementations • 12 Jul 2021 • Xinyu Bian, Yuyi Mao, Jun Zhang
In particular, the common sparsity pattern in the received pilot and data signal has been ignored in most existing studies, and auxiliary information of channel decoding has not been utilized for user activity detection.
no code implementations • 26 Apr 2021 • Xinyu Bian, Yuyi Mao, Jun Zhang
In this paper, we propose a turbo receiver for joint activity detection and data decoding in grant-free massive random access, which iterates between a detector and a belief propagation (BP)-based channel decoder.
no code implementations • 26 Apr 2021 • Yuchang Sun, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang
Federated edge learning (FEEL) has emerged as an effective approach to reduce the large communication latency in Cloud-based machine learning solutions, while preserving data privacy.
no code implementations • 17 Feb 2021 • Xinyu Bian, Yuyi Mao, Jun Zhang
Massive machine-type communication (mMTC) has been regarded as one of the most important use scenarios in the fifth generation (5G) and beyond wireless networks, which demands scalable access for a large number of devices.
1 code implementation • 8 Feb 2021 • Jiawei Shao, Yuyi Mao, Jun Zhang
Extensive experiments show that the proposed task-oriented communication system achieves a better rate-distortion tradeoff than baseline methods and significantly reduces the feature transmission latency in dynamic channel conditions.
1 code implementation • 27 Oct 2020 • Jiawei Shao, Haowei Zhang, Yuyi Mao, Jun Zhang
The recent advancements of three-dimensional (3D) data acquisition devices have spurred a new breed of applications that rely on point cloud data processing.
Distributed, Parallel, and Cluster Computing
no code implementations • 18 May 2016 • Yuyi Mao, Jun Zhang, Khaled B. Letaief
Sample simulation results are presented to verify the theoretical analysis and validate the effectiveness of the proposed algorithm.
Information Theory