1 code implementation • 26 Sep 2023 • Shih-Ying Yeh, Yu-Guan Hsieh, Zhidong Gao, Bernard B W Yang, Giyeong Oh, Yanmin Gong
Text-to-image generative models have garnered immense attention for their ability to produce high-fidelity images from text prompts.
no code implementations • 15 Apr 2023 • Zhenxiao Zhang, Yuanxiong Guo, Yuguang Fang, Yanmin Gong
In this paper, we propose a novel wireless FL scheme called private federated edge learning with sparsification (PFELS) that provides a client-level DP guarantee using intrinsic channel noise while reducing communication and energy overhead and improving model accuracy.
no code implementations • ICCV 2023 • Rui Chen, Qiyu Wan, Pavana Prakash, Lan Zhang, Xu Yuan, Yanmin Gong, Xin Fu, Miao Pan
However, practical deployment of FL over mobile devices is very challenging because (i) conventional FL incurs huge training latency for mobile devices due to interleaved local computation and communication of model updates, (ii) training data are heterogeneous across mobile devices, and (iii) mobile devices have hardware heterogeneity in terms of computing and communication capabilities.
no code implementations • 25 May 2022 • Zhenxiao Zhang, Zhidong Gao, Yuanxiong Guo, Yanmin Gong
On the other hand, the edge-based FL framework, which relies on an edge server co-located with a mobile base station for model aggregation, has low communication latency but suffers from degraded model accuracy due to the limited coverage of the edge server.
no code implementations • 15 Feb 2022 • Rui Hu, Yanmin Gong, Yuanxiong Guo
Federated learning (FL), which enables edge devices to collaboratively learn a shared model while keeping their training data local, has recently received great attention and offers better privacy protection than the traditional centralized learning paradigm.
no code implementations • ICLR 2022 • Yuanxiong Guo, Ying Sun, Rui Hu, Yanmin Gong
Communication is a key bottleneck in federated learning, where a large number of edge devices collaboratively learn a model under the orchestration of a central server without sharing their own training data.
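The server-orchestrated training loop described above can be sketched as a minimal federated-averaging round. This is an illustrative toy, not the paper's algorithm: the `local_update` objective (least squares), the learning rate, and the synthetic client data are all assumptions made for the example.

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """One hypothetical local gradient step on a least-squares objective."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fedavg_round(global_w, client_data):
    """One round: each client trains locally on its own data, then the
    server averages the resulting models -- raw data never leaves a client."""
    updates = [local_update(global_w.copy(), d) for d in client_data]
    return np.mean(updates, axis=0)

# Synthetic data held by 4 clients (assumption for the demo).
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):
    w = fedavg_round(w, clients)
```

Each round costs one model upload per client, which is why reducing the number or size of these exchanges is the central concern.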
no code implementations • 12 Sep 2020 • Zhidong Gao, Rui Hu, Yanmin Gong
Graph classification has practical applications in diverse fields.
no code implementations • 11 Sep 2020 • Rui Hu, Yanmin Gong
Federated Learning rests on the notion of training a global model in a distributed manner across various devices.
no code implementations • 1 Aug 2020 • Rui Hu, Yanmin Gong, Yuanxiong Guo
Since sparsification increases the number of communication rounds required to reach a target accuracy, which is unfavorable for the DP guarantee, we further introduce acceleration techniques to help reduce the privacy cost.
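The two ingredients above can be illustrated with a minimal sketch: top-k sparsification of an update, followed by the standard Gaussian mechanism (clip, then add noise) for a DP guarantee. The specific sparsifier, clipping norm, and noise scale here are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def topk_sparsify(vec, k):
    """Keep only the k largest-magnitude entries (a common sparsifier)."""
    out = np.zeros_like(vec)
    idx = np.argpartition(np.abs(vec), -k)[-k:]
    out[idx] = vec[idx]
    return out

def privatize(update, clip=1.0, sigma=0.5, rng=None):
    """Clip the update to a bounded L2 norm, then add Gaussian noise --
    the standard Gaussian-mechanism recipe (illustrative parameters)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.normal(scale=sigma * clip, size=update.shape)

g = np.random.default_rng(1).normal(size=100)
sparse = topk_sparsify(g, 10)   # fewer coordinates to transmit per round
noisy = privatize(sparse)       # noise per round is what the privacy budget pays for
```

Because every noisy round consumes privacy budget, needing more rounds to hit a target accuracy directly inflates the total privacy cost — hence the appeal of acceleration.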
no code implementations • 16 May 2020 • Zonghao Huang, Yanmin Gong
Alternating Direction Method of Multipliers (ADMM) is a popular algorithm for distributed learning, in which a network of nodes collaboratively solves a regularized empirical risk minimization problem through iterative local computation on distributed data and exchanges of iterates.
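The iterative structure described above — local computation plus iterate exchange — can be sketched with consensus ADMM for distributed ridge regression. The problem choice (quadratic losses), penalty parameter, and iteration count are assumptions for the sketch, not the paper's setting.

```python
import numpy as np

def consensus_admm(As, bs, lam=0.1, rho=1.0, iters=300):
    """Consensus ADMM for min_z sum_i 0.5*||A_i x_i - b_i||^2 + 0.5*lam*||z||^2
    subject to x_i = z. Each node updates (x_i, u_i) locally; only these
    iterates, never the raw data (A_i, b_i), are exchanged."""
    n, N = As[0].shape[1], len(As)
    z = np.zeros(n)
    us = [np.zeros(n) for _ in range(N)]
    for _ in range(iters):
        # Local x-updates: closed form for each quadratic subproblem.
        xs = [np.linalg.solve(A.T @ A + rho * np.eye(n),
                              A.T @ b + rho * (z - u))
              for A, b, u in zip(As, bs, us)]
        # Global z-update: average of node iterates, shrunk by the regularizer.
        z = rho * sum(x + u for x, u in zip(xs, us)) / (lam + N * rho)
        # Dual (scaled multiplier) updates enforce consensus over time.
        us = [u + x - z for u, x in zip(us, xs)]
    return z

rng = np.random.default_rng(2)
As = [rng.normal(size=(30, 3)) for _ in range(3)]
bs = [rng.normal(size=30) for _ in range(3)]
z = consensus_admm(As, bs)
```

Because the exchanged iterates are functions of the local data, they can leak information — which is precisely the privacy concern that motivates perturbing them.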
no code implementations • 30 Mar 2020 • Rui Hu, Yuanxiong Guo, Yanmin Gong
Federated learning is a machine learning setting in which a set of edge devices collaboratively train a model under the orchestration of a central server without sharing their local data.
no code implementations • 28 Mar 2020 • Rui Hu, Yuanxiong Guo, E. Paul. Ratazzi, Yanmin Gong
With the proliferation of smart devices with built-in sensors, Internet connectivity, and programmable computation capability in the era of the Internet of Things (IoT), tremendous amounts of data are being generated at the network edge.
no code implementations • 7 Jan 2019 • Jiahao Ding, Xiaoqi Qin, Wenjun Xu, Yanmin Gong, Chi Zhang, Miao Pan
Because massive amounts of data are distributed across multiple locations, distributed machine learning has attracted considerable research interest.
no code implementations • 30 Aug 2018 • Zonghao Huang, Rui Hu, Yuanxiong Guo, Eric Chan-Tin, Yanmin Gong
The goal of this paper is to provide differential privacy for ADMM-based distributed machine learning.