no code implementations • 29 Dec 2023 • Jie Shen, Shusen Yang, Cong Zhao, Xuebin Ren, Peng Zhao, Yuqian Yang, Qing Han, Shuaijun Wu
Intelligent equipment fault diagnosis based on Federated Transfer Learning (FTL) attracts considerable attention from both academia and industry.
1 code implementation • 14 Dec 2023 • Guiqin Wang, Peng Zhao, Yanjiang Shi, Cong Zhao, Shusen Yang
To address this gap, our paper introduces a knowledge distillation framework that uses a generative model to train a lightweight student model.
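The entry above only names the technique; the framework's architecture is not given here. As context, a minimal sketch of the standard knowledge-distillation loss (temperature-softened KL divergence between teacher and student logits, after Hinton et al.) — the function names and the temperature value are illustrative, not taken from the paper:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T gives softer probabilities."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher_soft || student_soft), scaled by T^2 so gradient
    # magnitudes stay comparable across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

The loss is zero when the student exactly matches the teacher's logits and positive otherwise; in practice it is combined with a cross-entropy term on the hard labels.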
no code implementations • ICCV 2023 • Guiqin Wang, Peng Zhao, Cong Zhao, Shusen Yang, Jie Cheng, Luziwei Leng, Jianxing Liao, Qinghai Guo
To address this problem, we propose a novel attention-based hierarchically-structured latent model to learn the temporal variations of feature semantics.
no code implementations • 16 Jan 2023 • Qiong Wu, Xu Chen, Tao Ouyang, Zhi Zhou, Xiaoxi Zhang, Shusen Yang, Junshan Zhang
Federated learning (FL) is a promising paradigm that enables collaboratively learning a shared model across massive clients while keeping the training data locally.
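The FL setting described above is usually instantiated with Federated Averaging: each client updates the shared model on its local data, and the server aggregates the updates weighted by local sample counts. A minimal sketch on a linear least-squares objective (the model and hyperparameters are illustrative, not from the paper):

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    # Plain gradient descent on a linear least-squares objective,
    # run entirely on the client's local data.
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    # clients: list of (X, y) pairs; raw data never leaves the client,
    # only the updated weights are sent back and averaged.
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_step(w_global, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.asarray(sizes, dtype=float))
```

Repeating `fedavg_round` drives the global model toward a solution of the pooled objective without any client uploading its data.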
1 code implementation • 24 Mar 2022 • Luhui Wang, Cong Zhao, Shusen Yang, Xinyu Yang, Julie McCann
Intelligent applications based on machine learning are impacting many parts of our lives.
no code implementations • 2 Mar 2022 • ZiHao Zhou, Yanan Li, Xuebin Ren, Shusen Yang
Federated learning (FL) is an emerging privacy-preserving paradigm that enables multiple participants to collaboratively train a global model without uploading raw data.
no code implementations • 29 Jan 2021 • Songbo Hou, Shusen Yang
Let $\lambda(t)$ be the first eigenvalue of $-\Delta+aR\, (a>0)$ under the backward Ricci flow on locally homogeneous 3-manifolds, where $R$ is the scalar curvature.
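For context, the operator $-\Delta + aR$ admits the standard Rayleigh-quotient characterization of its first eigenvalue (a standard fact added here to fix notation, not a claim from the paper):

```latex
\lambda(t) \;=\; \inf_{0 \neq f \in C^{\infty}(M)}
\frac{\int_M \bigl( |\nabla f|^2 + a R f^2 \bigr)\, d\mu_{g(t)}}
     {\int_M f^2 \, d\mu_{g(t)}},
```

where $g(t)$ is the evolving metric and $d\mu_{g(t)}$ its volume form; the time dependence of $\lambda$ enters through both $R$ and $d\mu_{g(t)}$.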
Differential Geometry 53E20, 58C40
no code implementations • 9 Oct 2020 • Fangyuan Zhao, Xuebin Ren, Shusen Yang, Qing Han, Peng Zhao, Xinyu Yang
To address the privacy issue in LDA, we systematically investigate the privacy protection of the mainstream LDA training algorithm based on Collapsed Gibbs Sampling (CGS) and propose several differentially private LDA algorithms for typical training scenarios.
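The paper's specific mechanisms are not reproduced in this snippet. As a generic illustration of one common building block, a Laplace-mechanism sketch that perturbs the topic-word count matrix before release — the sensitivity argument (one token occurrence changes one cell by 1, so L1 sensitivity 1 under token-level neighboring) is an assumption of this sketch, not the paper's analysis:

```python
import numpy as np

def privatize_topic_word_counts(counts, epsilon, rng=None):
    # Laplace mechanism: with token-level neighboring datasets, changing
    # one token perturbs a single cell by 1, giving L1 sensitivity 1
    # (an assumption of this sketch).
    rng = np.random.default_rng(rng)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    return np.clip(noisy, 0.0, None)  # counts cannot be negative
```

Smaller `epsilon` means heavier noise and stronger privacy; the clipping step is a post-processing operation, so it does not weaken the differential-privacy guarantee.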
no code implementations • 4 May 2020 • Yuanrui Dong, Peng Zhao, Hanqiao Yu, Cong Zhao, Shusen Yang
The emerging edge-cloud collaborative Deep Learning (DL) paradigm aims at improving the performance of practical DL implementations in terms of cloud bandwidth consumption, response latency, and data privacy preservation.
no code implementations • 22 Apr 2020 • Qing Han, Shusen Yang, Xuebin Ren, Cong Zhao, Jingqi Zhang, Xinyu Yang
However, heterogeneous and limited computation and communication resources on edge servers (or edges) pose great challenges to distributed ML and give rise to a new paradigm of Edge Learning (i.e., edge-cloud collaborative machine learning).
no code implementations • 17 Dec 2019 • Yanan Li, Shusen Yang, Xuebin Ren, Cong Zhao
Formally, we give the first analysis on the model convergence of AFL under DP and propose a multi-stage adjustable private algorithm (MAPA) to improve the trade-off between model utility and privacy by dynamically adjusting both the noise scale and the learning rate.
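MAPA's exact adjustment rule is not given in this snippet. In the same spirit, a sketch of a differentially private gradient step with a hypothetical multi-stage schedule that shrinks both the noise scale and the learning rate as training progresses — the function names, decay rule, and constants are illustrative assumptions:

```python
import numpy as np

def dp_sgd_step(w, grad, lr, noise_scale, clip=1.0, rng=None):
    # Clip the update to bound its sensitivity, then add Gaussian noise
    # calibrated to that bound.
    rng = np.random.default_rng(rng)
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad = grad * (clip / norm)
    noise = rng.normal(scale=noise_scale * clip, size=w.shape)
    return w - lr * (grad + noise)

def stage_schedule(stage, base_lr=0.5, base_noise=4.0, decay=0.5):
    # Hypothetical multi-stage schedule: later stages use less noise and
    # a smaller step size (the spirit of MAPA, not its exact rule).
    return base_lr * decay**stage, base_noise * decay**stage
```

The intuition is that early stages tolerate large noise because gradients are large, while late stages need low noise for the model to converge to a useful optimum.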
no code implementations • 5 Jun 2019 • Yanan Li, Xuebin Ren, Shusen Yang, Xinyu Yang
Considering general correlations, a closed-form expression of privacy leakage is derived for continuous data, and a chain rule is presented for discrete data.
no code implementations • 4 Jun 2019 • Fangyuan Zhao, Xuebin Ren, Shusen Yang, Xinyu Yang
Latent Dirichlet Allocation (LDA) is a popular topic modeling technique for discovering the hidden semantic structure of text datasets, and it plays a fundamental role in many machine learning applications.
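The standard training algorithm referenced throughout this line of work is collapsed Gibbs sampling: each token's topic is resampled from its conditional given all other assignments. A minimal self-contained sampler (hyperparameters and iteration count are illustrative):

```python
import numpy as np

def lda_gibbs(docs, V, K, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA.
    docs: list of word-id lists; V: vocabulary size; K: number of topics."""
    rng = np.random.default_rng(seed)
    D = len(docs)
    ndk = np.zeros((D, K))   # document-topic counts
    nkw = np.zeros((K, V))   # topic-word counts
    nk = np.zeros(K)         # total tokens per topic
    z = []                   # topic assignment per token
    for d, doc in enumerate(docs):
        zd = rng.integers(K, size=len(doc))
        z.append(zd)
        for w, k in zip(doc, zd):
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove the token, sample its topic from the collapsed
                # conditional, then add it back.
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    phi = (nkw + beta) / (nk[:, None] + V * beta)  # topic-word distributions
    return phi, ndk
```

The count matrices `ndk` and `nkw` are exactly the statistics that a differentially private variant must protect, since they are derived directly from the training corpus.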