Search Results for author: Yanmin Gong

Found 14 papers, 1 paper with code

Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation

1 code implementation • 26 Sep 2023 • Shih-Ying Yeh, Yu-Guan Hsieh, Zhidong Gao, Bernard B W Yang, Giyeong Oh, Yanmin Gong

Text-to-image generative models have garnered immense attention for their ability to produce high-fidelity images from text prompts.

Communication and Energy Efficient Wireless Federated Learning with Intrinsic Privacy

no code implementations • 15 Apr 2023 • Zhenxiao Zhang, Yuanxiong Guo, Yuguang Fang, Yanmin Gong

In this paper, we propose a novel wireless FL scheme called private federated edge learning with sparsification (PFELS) to provide a client-level DP guarantee with intrinsic channel noise while reducing communication and energy overhead and improving model accuracy.

Federated Learning
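
To make the PFELS idea concrete, here is a minimal sketch of a sparsified private client update. The function names, the top-k compressor, and the explicit Gaussian noise (standing in for the intrinsic channel noise the paper exploits) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries (illustrative compressor)."""
    flat = update.ravel().copy()
    drop = np.argpartition(np.abs(flat), -k)[:-k]  # indices of all but the top k
    flat[drop] = 0.0
    return flat.reshape(update.shape)

def private_sparse_update(update, k, clip_norm, noise_std, rng):
    """Sparsify, clip, then perturb a client update for client-level DP.
    PFELS relies on intrinsic channel noise; explicit Gaussian noise stands
    in for it here (an assumption of this sketch)."""
    sparse = top_k_sparsify(update, k)
    sparse *= min(1.0, clip_norm / (np.linalg.norm(sparse) + 1e-12))  # bound sensitivity
    return sparse + rng.normal(0.0, noise_std, size=sparse.shape)

rng = np.random.default_rng(0)
print(private_sparse_update(rng.standard_normal(20), k=4, clip_norm=1.0,
                            noise_std=0.1, rng=rng))
```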

Workie-Talkie: Accelerating Federated Learning by Overlapping Computing and Communications via Contrastive Regularization

no code implementations • ICCV 2023 • Rui Chen, Qiyu Wan, Pavana Prakash, Lan Zhang, Xu Yuan, Yanmin Gong, Xin Fu, Miao Pan

However, practical deployment of FL over mobile devices is very challenging because (i) conventional FL incurs huge training latency for mobile devices due to interleaved local computing and communications of model updates, (ii) there are heterogeneous training data across mobile devices, and (iii) mobile devices have hardware heterogeneity in terms of computing and communication capabilities.

Federated Learning

Scalable and Low-Latency Federated Learning with Cooperative Mobile Edge Networking

no code implementations • 25 May 2022 • Zhenxiao Zhang, Zhidong Gao, Yuanxiong Guo, Yanmin Gong

On the other hand, the edge-based FL framework, which relies on an edge server co-located with a mobile base station for model aggregation, has low communication latency but suffers from degraded model accuracy due to the limited coverage of the edge server.

Federated Learning
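
The cooperative edge setting described above can be sketched as two-level weighted averaging: clients aggregate at their nearby edge server, and cooperating edge servers then synchronize globally. The structure below is an illustrative assumption about the framework, not the paper's implementation.

```python
import numpy as np

def weighted_average(models, weights):
    """Average model vectors weighted by local sample counts."""
    w = np.asarray(weights, dtype=float)
    return sum((wi / w.sum()) * m for wi, m in zip(w, models))

def hierarchical_round(clients_per_edge):
    """One two-level round: clients -> their edge server -> global consensus.
    `clients_per_edge` maps each edge server to [(update, n_samples), ...]."""
    edge_models, edge_sizes = [], []
    for clients in clients_per_edge:
        updates, sizes = zip(*clients)
        edge_models.append(weighted_average(updates, sizes))  # low-latency edge step
        edge_sizes.append(sum(sizes))
    # cooperating edge servers then synchronize into one global model
    return weighted_average(edge_models, edge_sizes)

rng = np.random.default_rng(1)
edges = [[(rng.standard_normal(4), 10), (rng.standard_normal(4), 20)],
         [(rng.standard_normal(4), 30)]]
print(hierarchical_round(edges))
```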

Federated Learning with Sparsified Model Perturbation: Improving Accuracy under Client-Level Differential Privacy

no code implementations • 15 Feb 2022 • Rui Hu, Yanmin Gong, Yuanxiong Guo

Federated learning (FL), which enables edge devices to collaboratively learn a shared model while keeping their training data local, has recently received great attention and can protect privacy in comparison with the traditional centralized learning paradigm.

Federated Learning
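
The abstract's one-sentence definition maps onto the familiar FedAvg pattern, sketched below on a toy least-squares problem. The objective, step sizes, and uniform averaging are assumptions of this illustration; the paper's Fed-SMP scheme additionally sparsifies and perturbs each client update for client-level DP.

```python
import numpy as np

def local_train(model, X, y, lr=0.1, steps=5):
    """A few local gradient steps on a least-squares objective (toy stand-in
    for each device's private training)."""
    for _ in range(steps):
        model = model - lr * X.T @ (X @ model - y) / len(y)
    return model

def fedavg_round(global_model, client_data):
    """One FedAvg-style round: broadcast, train locally, average on server."""
    local_models = [local_train(global_model.copy(), X, y) for X, y in client_data]
    return np.mean(local_models, axis=0)

rng = np.random.default_rng(2)
clients = [(rng.standard_normal((20, 3)), rng.standard_normal(20)) for _ in range(4)]
model = np.zeros(3)
for _ in range(10):
    model = fedavg_round(model, clients)
print(model)
```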

Hybrid Local SGD for Federated Learning with Heterogeneous Communications

no code implementations • ICLR 2022 • Yuanxiong Guo, Ying Sun, Rui Hu, Yanmin Gong

Communication is a key bottleneck in federated learning where a large number of edge devices collaboratively learn a model under the orchestration of a central server without sharing their own training data.

Federated Learning
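
One plausible reading of "hybrid" communications, sketched here purely as an assumption: devices in a well-connected group average frequently over cheap local links, while the expensive server link is used only periodically. The grouping, gradient oracle, and periods below are illustrative, not the paper's algorithm.

```python
import numpy as np

def hybrid_local_sgd(groups, grad, lr=0.1, steps=20, global_period=5):
    """Illustrative hybrid scheme: models inside a well-connected group are
    averaged after every local step (cheap links), while all devices are
    averaged globally only every `global_period` steps (costly server link)."""
    for t in range(steps):
        for g in groups:
            for i in range(len(g)):
                g[i] = g[i] - lr * grad(g[i])        # local SGD step
            mean_g = np.mean(g, axis=0)              # frequent intra-group averaging
            g[:] = [mean_g.copy() for _ in g]
        if (t + 1) % global_period == 0:             # infrequent global averaging
            avg = np.mean([m for g in groups for m in g], axis=0)
            for g in groups:
                g[:] = [avg.copy() for _ in g]
    return np.mean([m for g in groups for m in g], axis=0)

rng = np.random.default_rng(3)
grad = lambda w: w - np.ones(3)                      # gradient of 0.5 * ||w - 1||^2
groups = [[rng.standard_normal(3) for _ in range(2)] for _ in range(3)]
print(hybrid_local_sgd(groups, grad))                # converges toward [1, 1, 1]
```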

Trading Data For Learning: Incentive Mechanism For On-Device Federated Learning

no code implementations • 11 Sep 2020 • Rui Hu, Yanmin Gong

Federated Learning rests on the notion of training a global model in a distributed manner across various devices.

Federated Learning

Federated Learning with Sparsification-Amplified Privacy and Adaptive Optimization

no code implementations • 1 Aug 2020 • Rui Hu, Yanmin Gong, Yuanxiong Guo

Since sparsification would increase the number of communication rounds required to achieve a certain target accuracy, which is unfavorable for the DP guarantee, we further introduce acceleration techniques to reduce the privacy cost.

Federated Learning
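
The round-count/privacy trade-off in that sentence follows from back-of-the-envelope composition: every communication round releases a privatized update, so the total budget grows with the number of rounds. The numbers below are illustrative, not from the paper.

```python
# Under basic composition, T rounds that each satisfy eps0-DP give T * eps0
# in total. If sparsification slows convergence from 100 to 200 rounds, the
# budget doubles; acceleration that restores ~100 rounds restores the budget.
eps_per_round = 0.05
for rounds in (100, 200):
    print(f"{rounds} rounds -> total epsilon {rounds * eps_per_round:.1f}")
```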

Differentially Private ADMM for Convex Distributed Learning: Improved Accuracy via Multi-Step Approximation

no code implementations • 16 May 2020 • Zonghao Huang, Yanmin Gong

The Alternating Direction Method of Multipliers (ADMM) is a popular algorithm for distributed learning, in which a network of nodes collaboratively solves a regularized empirical risk minimization problem through iterative local computation on distributed data and exchanges of iterates.
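
A compact consensus-ADMM sketch for distributed least squares shows the iterate-exchange pattern the abstract describes; the paper's DP mechanism and multi-step approximation are not reproduced here, and the toy problem is an assumption of this illustration.

```python
import numpy as np

def consensus_admm(data, dim, rho=1.0, iters=50):
    """Consensus ADMM for distributed least squares (toy illustration).
    Each node i holds (A_i, b_i); only iterates are exchanged, not data."""
    n = len(data)
    x = [np.zeros(dim) for _ in range(n)]   # local primal variables
    u = [np.zeros(dim) for _ in range(n)]   # scaled dual variables
    z = np.zeros(dim)                       # global consensus variable
    for _ in range(iters):
        for i, (A, b) in enumerate(data):
            # local update: argmin 0.5||Ax - b||^2 + (rho/2)||x - z + u_i||^2
            x[i] = np.linalg.solve(A.T @ A + rho * np.eye(dim),
                                   A.T @ b + rho * (z - u[i]))
        z = np.mean([x[i] + u[i] for i in range(n)], axis=0)
        for i in range(n):
            u[i] = u[i] + x[i] - z          # dual ascent step
    return z

rng = np.random.default_rng(4)
true_w = np.array([1.0, -2.0, 0.5])
data = []
for _ in range(3):
    A = rng.standard_normal((30, 3))
    data.append((A, A @ true_w + 0.01 * rng.standard_normal(30)))
print(consensus_admm(data, dim=3))          # recovers roughly true_w
```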

Concentrated Differentially Private and Utility Preserving Federated Learning

no code implementations • 30 Mar 2020 • Rui Hu, Yuanxiong Guo, Yanmin Gong

Federated learning is a machine learning setting where a set of edge devices collaboratively train a model under the orchestration of a central server without sharing their local data.

Federated Learning • Privacy Preserving
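
Concentrated DP (zCDP) is attractive for multi-round FL because its privacy cost composes additively and converts cleanly to an (eps, delta) guarantee. Below is a small sketch of standard zCDP accounting using the Gaussian-mechanism bound and the Bun-Steinke conversion; the parameters are illustrative, not the paper's.

```python
import math

def gaussian_zcdp(sigma, sensitivity=1.0):
    """rho-zCDP cost of one Gaussian-mechanism release with noise std sigma."""
    return sensitivity ** 2 / (2.0 * sigma ** 2)

def zcdp_to_dp(rho, delta=1e-5):
    """Standard conversion from rho-zCDP to an (eps, delta)-DP guarantee."""
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

rounds, sigma = 100, 8.0                    # illustrative parameters
rho_total = rounds * gaussian_zcdp(sigma)   # zCDP composes additively
print("eps at delta=1e-5:", zcdp_to_dp(rho_total))
```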

Differentially Private Federated Learning for Resource-Constrained Internet of Things

no code implementations • 28 Mar 2020 • Rui Hu, Yuanxiong Guo, E. Paul Ratazzi, Yanmin Gong

With the proliferation of smart devices with built-in sensors, Internet connectivity, and programmable computation capabilities in the era of the Internet of Things (IoT), a tremendous amount of data is being generated at the network edge.

Federated Learning

Differentially Private ADMM for Distributed Medical Machine Learning

no code implementations • 7 Jan 2019 • Jiahao Ding, Xiaoqi Qin, Wenjun Xu, Yanmin Gong, Chi Zhang, Miao Pan

With massive amounts of data distributed across multiple locations, distributed machine learning has attracted significant research interest.

Machine Learning

DP-ADMM: ADMM-based Distributed Learning with Differential Privacy

no code implementations • 30 Aug 2018 • Zonghao Huang, Rui Hu, Yuanxiong Guo, Eric Chan-Tin, Yanmin Gong

The goal of this paper is to provide differential privacy for ADMM-based distributed machine learning.

Machine Learning
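
The high-level recipe for differentially private ADMM is to clip and perturb each iterate before it leaves the node. The sketch below shows that general recipe only; the function name is hypothetical, and the paper's exact noise calibration and convergence analysis are not reproduced here.

```python
import numpy as np

def dp_share(iterate, clip_norm, sigma, rng):
    """Clip a local iterate and add Gaussian noise before exchanging it,
    so each ADMM communication step is differentially private (general
    recipe; the paper's calibration and analysis differ)."""
    scale = min(1.0, clip_norm / (np.linalg.norm(iterate) + 1e-12))
    return iterate * scale + rng.normal(0.0, sigma * clip_norm, size=iterate.shape)

rng = np.random.default_rng(5)
x_i = rng.standard_normal(3)                # a node's local primal iterate
print(dp_share(x_i, clip_norm=1.0, sigma=0.5, rng=rng))
```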
