no code implementations • 16 Apr 2024 • Zhiyuan Wu, Sheng Sun, Yuwei Wang, Min Liu, Bo Gao, Tianliu He, Wen Wang
On-device intelligence (ODI) enables artificial intelligence (AI) applications to run on end devices, providing real-time and customized AI inference without relying on remote servers.
no code implementations • 13 Mar 2024 • Sicen Guo, Zhiyuan Wu, Qijun Chen, Ioannis Pitas, Rui Fan
We introduce the Learning to Infuse "X" (LIX) framework, with novel contributions to both logit distillation and feature distillation.
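As a rough illustration of how logit and feature distillation terms are typically combined (the temperature, loss weights, and matching-shape assumption below are illustrative choices, not the LIX formulation):

```python
import torch
import torch.nn.functional as F

def logit_distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

def feature_distillation_loss(student_feat, teacher_feat):
    # L2 mimicry between intermediate features (shapes assumed to match;
    # a 1x1 conv adapter would be needed otherwise).
    return F.mse_loss(student_feat, teacher_feat)

def total_loss(s_logits, t_logits, s_feat, t_feat, labels,
               alpha=0.5, beta=0.5):
    # Hand-picked weights alpha/beta are assumptions for illustration.
    task = F.cross_entropy(s_logits, labels)
    return (task
            + alpha * logit_distillation_loss(s_logits, t_logits)
            + beta * feature_distillation_loss(s_feat, t_feat))
```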
no code implementations • 21 Jan 2024 • Zhiyuan Wu, Yi Feng, Chuang-Wei Liu, Fisher Yu, Qijun Chen, Rui Fan
Hence, in this article, we introduce S$^3$M-Net, a novel joint learning framework developed to perform semantic segmentation and stereo matching simultaneously.
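A minimal sketch of what a joint segmentation-plus-stereo objective can look like; the specific loss terms and the trade-off weight `lam` are assumptions for illustration, not the S$^3$M-Net design:

```python
import torch
import torch.nn.functional as F

def joint_loss(seg_logits, seg_labels, pred_disp, gt_disp, lam=1.0):
    # Semantic segmentation branch: per-pixel cross-entropy.
    seg_loss = F.cross_entropy(seg_logits, seg_labels)
    # Stereo matching branch: smooth L1 on pixels with valid ground-truth
    # disparity only (disparity 0 treated as invalid here).
    valid = gt_disp > 0
    disp_loss = F.smooth_l1_loss(pred_disp[valid], gt_disp[valid])
    return seg_loss + lam * disp_loss
```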
no code implementations • 8 Jan 2024 • Yuhan Tang, Zhiyuan Wu, Bo Gao, Tian Wen, Yuwei Wang, Sheng Sun
Federated Distillation (FD) is a novel and promising distributed machine learning paradigm, where knowledge distillation is leveraged to facilitate more efficient and flexible cross-device knowledge transfer in federated learning.
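A minimal sketch of a generic FD round under common assumptions (clients upload soft predictions on shared public data, the server averages them, and each client distills from the average); the specifics below are illustrative, not this paper's method:

```python
import torch
import torch.nn.functional as F

def aggregate_knowledge(client_logits):
    # Server side: average the soft predictions uploaded by all clients.
    return torch.stack(client_logits).mean(dim=0)

def distill_step(model, optimizer, public_x, global_logits, T=2.0):
    # Client side: align local predictions with the aggregated knowledge.
    optimizer.zero_grad()
    local = model(public_x)
    loss = F.kl_div(F.log_softmax(local / T, dim=1),
                    F.softmax(global_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Exchanging logits instead of model parameters is what lets FD support heterogeneous local architectures at a fraction of the communication cost.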
2 code implementations • 1 Jan 2024 • Zhiyuan Wu, Tianliu He, Sheng Sun, Yuwei Wang, Min Liu, Bo Gao, Xuefeng Jiang
Federated Learning (FL) enables collaborative model training among participants while guaranteeing the privacy of raw data.
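For reference, the classic FedAvg aggregation step that underlies most FL pipelines; this is a generic sketch, not the method proposed in the paper:

```python
import torch

def fedavg(client_states, client_sizes):
    # Weighted average of client model parameters, with weights
    # proportional to each client's local dataset size.
    total = sum(client_sizes)
    keys = client_states[0].keys()
    return {
        k: sum(s[k] * (n / total) for s, n in zip(client_states, client_sizes))
        for k in keys
    }
```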
1 code implementation • 7 Dec 2023 • Zhiyuan Wu, Sheng Sun, Yuwei Wang, Min Liu, Tian Wen, Wen Wang
ALU drastically reduces the frequency of communication in federated distillation, and with it the communication overhead of training.
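A hedged sketch of the accumulation idea: buffer local updates and communicate only once per window. The interval and the averaging below are assumptions for illustration, not the paper's exact mechanism:

```python
class AccumulatedLocalUpdate:
    """Buffer local updates and upload only every `interval` rounds,
    trading communication frequency for staleness (illustrative)."""
    def __init__(self, interval=10):
        self.interval = interval
        self.buffer = None
        self.count = 0

    def push(self, update):
        # Running sum of local updates since the last upload.
        self.buffer = update if self.buffer is None else self.buffer + update
        self.count += 1

    def pop_if_ready(self):
        # Release the averaged accumulated update once per interval.
        if self.count < self.interval:
            return None
        out = self.buffer / self.count
        self.buffer, self.count = None, 0
        return out
```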
1 code implementation • 1 Dec 2023 • Zhiyuan Wu, Sheng Sun, Yuwei Wang, Min Liu, Bo Gao, Quyang Pan, Tianliu He, Xuefeng Jiang
Federated Learning (FL) enables training Artificial Intelligence (AI) models over end devices without compromising their privacy.
no code implementations • 14 Nov 2023 • Yuwei Wang, Runhan Li, Hao Tan, Xuefeng Jiang, Sheng Sun, Min Liu, Bo Gao, Zhiyuan Wu
By fusing the logits of the two models, the private weak learner can capture the variance of different data, regardless of their category.
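A minimal sketch of logit fusion under the stated idea; the fusion weight `w` and the distillation temperature are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def fuse_logits(weak_logits, strong_logits, w=0.5):
    # Convex combination of the two models' class scores; w balances
    # the private weak learner against the other model (illustrative).
    return w * weak_logits + (1.0 - w) * strong_logits

def fusion_kd_loss(weak_logits, fused_logits, T=2.0):
    # The fused distribution supervises the weak learner; fused_logits
    # is assumed precomputed and detached from the graph.
    return F.kl_div(F.log_softmax(weak_logits / T, dim=1),
                    F.softmax(fused_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
```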
1 code implementation • 14 Jan 2023 • Zhiyuan Wu, Sheng Sun, Yuwei Wang, Min Liu, Xuefeng Jiang, Runhan Li, Bo Gao
The increasing demand for intelligent services and privacy protection of mobile and Internet of Things (IoT) devices motivates the wide application of Federated Edge Learning (FEL), in which devices collaboratively train on-device Machine Learning (ML) models without sharing their private data.
1 code implementation • 1 Jan 2023 • Zhiyuan Wu, Sheng Sun, Yuwei Wang, Min Liu, Quyang Pan, Xuefeng Jiang, Bo Gao
Federated Multi-task Learning (FMTL) is proposed to train related but personalized ML models for different devices; however, previous works suffer from excessive communication overhead during training and neglect model heterogeneity among devices in mobile edge computing (MEC).
no code implementations • 3 Sep 2022 • Shuanglong Yao, Dechang Pi, Junfu Chen, Yufei Liu, Zhiyuan Wu
The task of link prediction aims to address incomplete knowledge, which arises from the difficulty of collecting facts from the real world.
2 code implementations • 14 Apr 2022 • Zhiyuan Wu, Sheng Sun, Yuwei Wang, Min Liu, Quyang Pan, Junbo Zhang, Zeju Li, Qingxiang Liu
Federated distillation (FD) is proposed to simultaneously address the above two problems, which exchanges knowledge between the server and clients, supporting heterogeneous local models while significantly reducing communication overhead.
1 code implementation • 6 May 2021 • Zizhen Zhang, Zhiyuan Wu, Hang Zhang, Jiahai Wang
When these problems are extended to multiobjective ones, existing DRL approaches struggle to flexibly and efficiently handle the multiple subproblems determined by weight decomposition of the objectives.
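For concreteness, weight decomposition by weighted-sum scalarization turns the objective vector into one single-objective subproblem per weight vector; a toy sketch (the objective values below are made up):

```python
import torch

def scalarize(objectives, weights):
    # Weighted-sum decomposition: each weight vector defines one
    # single-objective subproblem over the objective vector.
    return (weights * objectives).sum(dim=-1)

# Example: trade off two objectives across three subproblems.
objs = torch.tensor([10.0, 4.0])
for w in torch.tensor([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]):
    print(w.tolist(), "->", scalarize(objs, w).item())
```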
no code implementations • 29 Apr 2021 • Zhiyuan Wu, Yu Jiang, Minghao Zhao, Chupeng Cui, Zongmin Yang, Xinhui Xue, Hong Qi
To further improve the robustness of the student, we extend SD to Enhanced Spirit Distillation (ESD), which exploits more comprehensive knowledge by introducing a proximity domain, similar to the target domain, for feature extraction.
no code implementations • 25 Mar 2021 • Zhiyuan Wu, Yu Jiang, Chupeng Cui, Zongmin Yang, Xinhui Xue, Hong Qi
Inspired by the ideas of Fine-tuning-based Transfer Learning (FTT) and feature-based knowledge distillation, we propose Spirit Distillation (SD), a new knowledge distillation method for cross-domain knowledge transfer and efficient network training on insufficient data. SD allows the student network to mimic the teacher network in extracting general features, so that a compact and accurate student network can be trained for real-time semantic segmentation of road scenes.
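A minimal sketch of the feature-mimicking idea behind feature-based distillation; the 1x1 convolution adapter is an assumption to bridge differing channel widths, not the SD architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureMimicLoss(nn.Module):
    """Student mimics the teacher's intermediate features; a 1x1 conv
    adapter (assumption) matches channel dimensions when they differ."""
    def __init__(self, student_ch, teacher_ch):
        super().__init__()
        self.adapter = nn.Conv2d(student_ch, teacher_ch, kernel_size=1)

    def forward(self, student_feat, teacher_feat):
        # Teacher features serve as frozen targets (no gradient flows
        # back into the teacher network).
        return F.mse_loss(self.adapter(student_feat), teacher_feat.detach())
```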
no code implementations • 26 Oct 2020 • Zhiyuan Wu, Hong Qi, Yu Jiang, Minghao Zhao, Chupeng Cui, Zongmin Yang, Xinhui Xue
Model compression has become a recent trend, driven by the need to deploy neural networks on embedded and mobile devices.