no code implementations • 13 Mar 2024 • Xinjie Zhang, Shenyuan Gao, Zhening Liu, Jiawei Shao, Xingtong Ge, Dailan He, Tongda Xu, Yan Wang, Jun Zhang
Existing learning-based stereo image codecs adopt sophisticated transformations with simple entropy models derived from single-image codecs to encode latent representations.
no code implementations • 30 Aug 2023 • Zijian Li, Zehong Lin, Jiawei Shao, Yuyi Mao, Jun Zhang
However, devices often have non-independent and identically distributed (non-IID) data, meaning their local data distributions can vary significantly.
no code implementations • 9 Aug 2023 • Zijian Li, Yuchang Sun, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang
For better privacy preservation, we propose a hard feature augmentation method that transfers real features towards the decision boundary, so that the synthetic data not only improve model generalization but also erase the information of the real features.
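A minimal sketch of what "transferring a feature towards the decision boundary" could mean for a linear classifier (an illustrative toy, not the paper's actual method; the classifier `w`, `b` and step size `alpha` are hypothetical):

```python
import numpy as np

def harden(x, w, b, alpha=0.8):
    """Move feature x a fraction alpha of the way toward the
    decision hyperplane w·x + b = 0, stepping along the normal."""
    dist = (w @ x + b) / (w @ w)   # signed offset scaled by ||w||^2
    return x - alpha * dist * w

# Toy linear classifier and a real feature.
w = np.array([1.0, -2.0])
b = 0.5
x = np.array([3.0, 1.0])

x_hard = harden(x, w, b)
# x_hard sits closer to the boundary than x, i.e. it is a "harder" sample.
```

With `alpha = 1.0` the synthetic feature would land exactly on the boundary; smaller values trade hardness against retaining the original feature's direction.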
no code implementations • 20 Jul 2023 • Jiawei Shao, Zijian Li, Wenqiang Sun, Tailin Zhou, Yuchang Sun, Lumin Liu, Zehong Lin, Yuyi Mao, Jun Zhang
Without data centralization, FL allows clients to share local information in a privacy-preserving manner.
no code implementations • 6 Jul 2023 • Yifei Shen, Jiawei Shao, Xinjie Zhang, Zehong Lin, Hao Pan, Dongsheng Li, Jun Zhang, Khaled B. Letaief
The evolution of wireless networks gravitates towards connected intelligence, a concept that envisions seamless interconnectivity among humans, objects, and intelligence in a hyper-connected cyber-physical world.
1 code implementation • 21 May 2023 • Hongru Li, Wentao Yu, Hengtao He, Jiawei Shao, Shenghui Song, Jun Zhang, Khaled B. Letaief
Task-oriented communication is an emerging paradigm for next-generation communication networks, which extracts and transmits task-relevant information, instead of raw data, for downstream applications.
1 code implementation • 4 Apr 2023 • Jiawei Shao, Fangzhao Wu, Jun Zhang
While federated learning is promising for privacy-preserving collaborative learning without revealing local data, it remains vulnerable to white-box attacks and struggles to adapt to heterogeneous clients.
1 code implementation • 21 Mar 2023 • Xinjie Zhang, Jiawei Shao, Jun Zhang
This has inspired a distributed coding architecture aimed at reducing the encoding complexity.
no code implementations • 24 Feb 2023 • Xuefeng Wang, Xinran Li, Jiawei Shao, Jun Zhang
Learning communication strategies in cooperative multi-agent reinforcement learning (MARL) has recently attracted intensive attention.
1 code implementation • 24 Jan 2023 • Xinjie Zhang, Jiawei Shao, Jun Zhang
Multi-view image compression plays a critical role in 3D-related applications.
1 code implementation • 25 Nov 2022 • Jiawei Shao, Xinjie Zhang, Jun Zhang
With the development of artificial intelligence (AI) techniques and the increasing popularity of camera-equipped devices, many edge video analytics applications are emerging, calling for the deployment of computation-intensive AI models at the network edge.
no code implementations • 8 Nov 2022 • Yuchang Sun, Jiawei Shao, Yuyi Mao, Songze Li, Jun Zhang
During training, the server computes gradients on the global coded dataset to compensate for the missing model updates of the straggling devices.
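The compensation idea above can be sketched in a few lines (a conceptual illustration only; the coding here is plain concatenation, whereas the paper's encoding scheme and mixing weights are not specified in this excerpt):

```python
import numpy as np

def grad(w, X, y):
    """Gradient of mean squared error for a linear model."""
    return 2 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
w = np.zeros(3)

# Three clients' local datasets of equal size.
clients = [(rng.normal(size=(8, 3)), rng.normal(size=8)) for _ in range(3)]

# Coded dataset held at the server (here: a simple concatenation stand-in).
X_coded = np.vstack([X for X, _ in clients])
y_coded = np.concatenate([y for _, y in clients])

arrived = [0, 2]  # client 1 straggles this round
g_arrived = sum(grad(w, *clients[i]) for i in arrived) / len(arrived)
g_coded = grad(w, X_coded, y_coded)  # stands in for the missing update

# Mix received updates with the coded-gradient estimate (weights are ad hoc).
g = 0.5 * g_arrived + 0.5 * g_coded
w -= 0.1 * g
```

With equal-size local datasets, the gradient on the concatenated data equals the average of the per-client gradients, which is why it can compensate for stragglers.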
no code implementations • 6 Oct 2022 • Jiawei Shao, Yuchang Sun, Songze Li, Jun Zhang
Federated learning (FL) strives to enable collaborative training of machine learning models without centrally collecting clients' private data.
no code implementations • 11 Jun 2022 • Zijian Li, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang
A combination of the local private dataset and synthetic dataset with confident pseudo labels leads to nearly identical data distributions among clients, which improves the consistency among local models and benefits the global aggregation.
no code implementations • 25 Jan 2022 • Yuchang Sun, Jiawei Shao, Songze Li, Yuyi Mao, Jun Zhang
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework, where many clients collaboratively train a machine learning model by exchanging model updates with a parameter server instead of sharing their raw data.
no code implementations • 20 Dec 2021 • Yuchang Sun, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang
By exploiting the low-latency communication among edge servers for efficient model sharing, SD-FEEL can incorporate more training data, while enjoying much lower latency compared with conventional federated learning.
no code implementations • 9 Dec 2021 • Yuchang Sun, Jiawei Shao, Yuyi Mao, Jun Zhang
Federated edge learning (FEEL) has drawn much attention as a privacy-preserving distributed learning framework for mobile edge networks.
2 code implementations • 1 Sep 2021 • Jiawei Shao, Yuyi Mao, Jun Zhang
To enable low-latency cooperative inference, we propose a learning-based communication scheme that optimizes local feature extraction and distributed feature encoding in a task-oriented manner, i.e., removing data redundancy and transmitting only the information essential for the downstream inference task, rather than reconstructing the data samples at the edge server.
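The task-oriented pipeline can be pictured as follows (a conceptual sketch with made-up dimensions and random weights standing in for learned ones; it shows only the data flow, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
raw = rng.normal(size=784)          # raw sample, e.g. a flattened image

# Device side: local feature extraction into a compact representation.
W_enc = rng.normal(size=(16, 784))  # hypothetical learned encoder weights
feature = np.tanh(W_enc @ raw)

# Only `feature` (16 values) crosses the channel, not the 784 raw values.
# Server side: run the downstream task head directly on the feature,
# without ever reconstructing the raw sample.
W_task = rng.normal(size=(10, 16))  # hypothetical task head weights
logits = W_task @ feature
prediction = int(np.argmax(logits))
```

The point of the sketch is the asymmetry: the channel carries a 16-dimensional task-relevant feature instead of the 784-dimensional raw input.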
no code implementations • 30 Aug 2021 • Xinjie Zhang, Jiawei Shao, Yuyi Mao, Jun Zhang
Device-edge co-inference, which partitions a deep neural network between a resource-constrained mobile device and an edge server, has recently emerged as a promising paradigm to support intelligent mobile applications.
no code implementations • 26 Apr 2021 • Yuchang Sun, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang
Federated edge learning (FEEL) has emerged as an effective approach to reduce the large communication latency in Cloud-based machine learning solutions, while preserving data privacy.
1 code implementation • 8 Feb 2021 • Jiawei Shao, Yuyi Mao, Jun Zhang
Extensive experiments demonstrate that the proposed task-oriented communication system achieves a better rate-distortion tradeoff than baseline methods and significantly reduces the feature transmission latency under dynamic channel conditions.
1 code implementation • 27 Oct 2020 • Jiawei Shao, Haowei Zhang, Yuyi Mao, Jun Zhang
The recent advancements of three-dimensional (3D) data acquisition devices have spurred a new breed of applications that rely on point cloud data processing.
1 code implementation • 3 Jun 2020 • Jiawei Shao, Jun Zhang
The recent breakthrough in artificial intelligence (AI), especially deep neural networks (DNNs), has affected every branch of science and technology.
1 code implementation • 31 Oct 2019 • Jiawei Shao, Jun Zhang
By exploiting the strong sparsity and the fault-tolerant property of the intermediate feature in a deep neural network (DNN), BottleNet++ achieves a much higher compression ratio than existing methods.
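Why sparse intermediate features compress well can be seen with a toy index-plus-value encoding (a generic illustration of the sparsity observation, not BottleNet++'s actual codec; the shifted-ReLU feature and sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
# ReLU-style intermediate feature: most activations are exactly zero.
feature = np.maximum(rng.normal(size=1024) - 1.0, 0.0)

# Naive sparse encoding: transmit one (index, value) pair per nonzero entry.
idx = np.flatnonzero(feature)
vals = feature[idx]
encoded_entries = 2 * len(idx)   # far fewer than the 1024 dense entries

# The receiver reconstructs the feature losslessly.
decoded = np.zeros_like(feature)
decoded[idx] = vals
```

With roughly 16% of entries nonzero here, even this naive scheme beats dense transmission; a learned codec can additionally exploit the fault tolerance of DNN features to compress lossily.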