Search Results for author: Kang Wei

Found 23 papers, 4 papers with code

Dual Expert Distillation Network for Generalized Zero-Shot Learning

no code implementations25 Apr 2024 Zhijie Rao, Jingcai Guo, Xiaocheng Lu, Jingming Liang, Jie Zhang, Haozhao Wang, Kang Wei, Xiaofeng Cao

Zero-shot learning has consistently yielded remarkable progress by modeling nuanced one-to-one visual-attribute correlations.

Attribute Generalized Zero-Shot Learning

Mobility and Cost Aware Inference Accelerating Algorithm for Edge Intelligence

no code implementations27 Dec 2023 Xin Yuan, Ning Li, Kang Wei, Wenchao Xu, Quan Chen, Hao Chen, Song Guo

Model segmentation without user mobility has been investigated in depth by prior works.

Segmentation

Refine, Discriminate and Align: Stealing Encoders via Sample-Wise Prototypes and Multi-Relational Extraction

no code implementations1 Dec 2023 Shuchi Wu, Chuan Ma, Kang Wei, Xiaogang Xu, Ming Ding, Yuwen Qian, Tao Xiang

This paper introduces RDA, a pioneering approach designed to address two primary deficiencies prevalent in previous endeavors aiming at stealing pre-trained encoders: (1) suboptimal performances attributed to biased optimization objectives, and (2) elevated query costs stemming from the end-to-end paradigm that necessitates querying the target encoder every epoch.

Attribute-Aware Representation Rectification for Generalized Zero-Shot Learning

no code implementations23 Nov 2023 Zhijie Rao, Jingcai Guo, Xiaocheng Lu, Qihua Zhou, Jie Zhang, Kang Wei, Chenxin Li, Song Guo

In this paper, we propose a simple yet effective Attribute-Aware Representation Rectification framework for GZSL, dubbed $\mathbf{(AR)^{2}}$, to adaptively rectify the feature extractor to learn novel features while keeping original valuable features.

Attribute Generalized Zero-Shot Learning +1

Federated Meta-Learning for Few-Shot Fault Diagnosis with Representation Encoding

no code implementations13 Oct 2023 Jixuan Cui, Jun Li, Zhen Mei, Kang Wei, Sha Wei, Ming Ding, Wen Chen, Song Guo

However, the domain discrepancy and data scarcity problems among clients deteriorate the performance of the global FL model.

Federated Learning Meta-Learning +1

Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity

no code implementations4 Aug 2023 Xuefeng Han, Jun Li, Wen Chen, Zhen Mei, Kang Wei, Ming Ding, H. Vincent Poor

With the rapid proliferation of smart mobile devices, federated learning (FL) has been widely considered for application in wireless networks for distributed model training.

Federated Learning Scheduling

Towards the Flatter Landscape and Better Generalization in Federated Learning under Client-level Differential Privacy

1 code implementation1 May 2023 Yifan Shi, Kang Wei, Li Shen, Yingqi Liu, Xueqian Wang, Bo Yuan, DaCheng Tao

To defend against inference attacks and mitigate sensitive information leakage in Federated Learning (FL), client-level Differentially Private FL (DPFL) is the de facto standard for privacy protection, clipping local updates and adding random noise.

Federated Learning
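The clip-and-noise step the abstract describes can be sketched as follows. This is a generic client-level DP update, not the paper's exact algorithm; the parameter names `clip_norm` and `noise_multiplier` are illustrative.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a client's model update to `clip_norm`, then add Gaussian noise.

    Generic client-level DP step (clip + noise); parameter names are
    illustrative, not taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(update)
    # Scale down only if the update exceeds the clipping threshold.
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # Noise scale is proportional to the clipping threshold (sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```

Clipping bounds each client's influence (the sensitivity), which is what lets the added noise translate into a formal privacy guarantee.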

Gradient Sparsification for Efficient Wireless Federated Learning with Differential Privacy

no code implementations9 Apr 2023 Kang Wei, Jun Li, Chuan Ma, Ming Ding, Feng Shu, Haitao Zhao, Wen Chen, Hongbo Zhu

Specifically, we first design a random sparsification algorithm to retain a fraction of the gradient elements in each client's local training, thereby mitigating the performance degradation induced by DP and reducing the number of parameters transmitted over wireless channels.

Federated Learning Scheduling +1
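The random sparsification the abstract mentions (retaining a fraction of gradient elements) can be sketched as a random mask; this is an assumed minimal form, not the paper's implementation, and `keep_ratio` is an illustrative parameter name.

```python
import numpy as np

def random_sparsify(grad, keep_ratio=0.1, rng=None):
    """Randomly retain a fraction of gradient elements, zeroing the rest.

    Random (not top-k) sparsification: each element is kept
    independently with probability `keep_ratio`.
    """
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(grad.shape) < keep_ratio
    return grad * mask
```

A common variant rescales the surviving elements by `1 / keep_ratio` so the sparsified gradient remains an unbiased estimate of the original; whether that is done here is not stated in the snippet.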

Design of Two-Level Incentive Mechanisms for Hierarchical Federated Learning

no code implementations9 Apr 2023 Shunfeng Chu, Jun Li, Kang Wei, Yuwen Qian, Kunlun Wang, Feng Shu, Wen Chen

In this paper, we design two-level incentive mechanisms for the HFL with a two-tiered computing structure to encourage the participation of entities in each tier in the HFL training.

Federated Learning

Make Landscape Flatter in Differentially Private Federated Learning

1 code implementation CVPR 2023 Yifan Shi, Yingqi Liu, Kang Wei, Li Shen, Xueqian Wang, DaCheng Tao

Specifically, DP-FedSAM integrates Sharpness Aware Minimization (SAM) optimizer to generate local flatness models with better stability and weight perturbation robustness, which results in the small norm of local updates and robustness to DP noise, thereby improving the performance.

Federated Learning
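The SAM step that DP-FedSAM builds on can be sketched generically: perturb the weights toward the locally worst-case direction of radius rho, then take the gradient at the perturbed point. This is the standard SAM computation (Foret et al.), not DP-FedSAM itself.

```python
import numpy as np

def sam_gradient(w, grad_fn, rho=0.05):
    """One Sharpness-Aware Minimization (SAM) gradient computation.

    `grad_fn(w)` returns the loss gradient at weights `w`; `rho` is the
    perturbation radius. Returns the gradient evaluated at the
    adversarially perturbed weights, which favors flat minima.
    """
    g = grad_fn(w)
    # Normalized ascent direction of radius rho.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Gradient at the perturbed (locally worst-case) point.
    return grad_fn(w + eps)
```

Flatter minima make the local updates smaller in norm and more robust to the DP noise added afterwards, which is the intuition the abstract states.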

Amplitude-Varying Perturbation for Balancing Privacy and Utility in Federated Learning

no code implementations7 Mar 2023 Xin Yuan, Wei Ni, Ming Ding, Kang Wei, Jun Li, H. Vincent Poor

The contribution of the new DP mechanism to the convergence and accuracy of privacy-preserving FL is corroborated, compared to the state-of-the-art Gaussian noise mechanism with a persistent noise amplitude.

Federated Learning Privacy Preserving

Improving the Model Consistency of Decentralized Federated Learning

no code implementations8 Feb 2023 Yifan Shi, Li Shen, Kang Wei, Yan Sun, Bo Yuan, Xueqian Wang, DaCheng Tao

To mitigate the privacy leakages and communication burdens of Federated Learning (FL), decentralized FL (DFL) discards the central server and each client only communicates with its neighbors in a decentralized communication network.

Federated Learning

Vertical Federated Learning: Challenges, Methodologies and Experiments

no code implementations9 Feb 2022 Kang Wei, Jun Li, Chuan Ma, Ming Ding, Sha Wei, Fan Wu, Guihai Chen, Thilina Ranbaduge

As a special architecture in FL, vertical FL (VFL) is capable of constructing a hyper ML model by integrating sub-models from different clients.

Vertical Federated Learning

Low-Latency Federated Learning over Wireless Channels with Differential Privacy

no code implementations20 Jun 2021 Kang Wei, Jun Li, Chuan Ma, Ming Ding, Cailian Chen, Shi Jin, Zhu Han, H. Vincent Poor

Then, we convert the multi-agent multi-armed bandit (MAMAB) problem to a max-min bipartite matching problem at each communication round, estimating rewards with the upper confidence bound (UCB) approach.

Federated Learning
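The UCB reward estimate the abstract refers to can be sketched with the standard UCB1 index (empirical mean plus an exploration bonus). The max-min bipartite matching the paper pairs it with is omitted here; this is a textbook sketch, not the paper's algorithm.

```python
import math

def ucb_index(mean_reward, pulls, t, c=2.0):
    """Upper-confidence-bound estimate of an arm's reward.

    `mean_reward` is the empirical mean so far, `pulls` how often the
    arm was selected, `t` the current round. Unexplored arms get an
    infinite index so they are tried at least once.
    """
    if pulls == 0:
        return float("inf")
    # Exploration bonus shrinks as the arm accumulates observations.
    return mean_reward + math.sqrt(c * math.log(t) / pulls)
```

In the scheduling setting, each (client, channel) pair would play the role of an arm, and the indices feed the per-round matching.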

Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design

1 code implementation10 May 2021 Chuan Ma, Jun Li, Ming Ding, Kang Wei, Wen Chen, H. Vincent Poor

Owing to the low communication costs and privacy-promoting capabilities, Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.

Federated Learning

Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization

no code implementations28 Jan 2021 Kang Wei, Jun Li, Ming Ding, Chuan Ma, Yo-Seb Jeon, H. Vincent Poor

An attacker in FL may control a number of participating clients and purposely craft the uploaded model parameters to manipulate system outputs, namely, model poisoning (MP).

Federated Learning Model Poisoning

Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation

no code implementations18 Jan 2021 Jun Li, Yumeng Shao, Kang Wei, Ming Ding, Chuan Ma, Long Shi, Zhu Han, H. Vincent Poor

Focusing on this problem, we explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.

Federated Learning

Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients

no code implementations2 Dec 2020 Jun Li, Yumeng Shao, Ming Ding, Chuan Ma, Kang Wei, Zhu Han, H. Vincent Poor

The proposed BLADE-FL has a good performance in terms of privacy preservation, tamper resistance, and effective cooperation of learning.

Federated Learning

RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network

1 code implementation4 Jul 2020 Chuan Ma, Jun Li, Ming Ding, Bo Liu, Kang Wei, Jian Weng, H. Vincent Poor

Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.

Generative Adversarial Network

DNN-aided Read-voltage Threshold Optimization for MLC Flash Memory with Finite Block Length

no code implementations11 Apr 2020 Cheng Wang, Kang Wei, Lingjun Kong, Long Shi, Zhen Mei, Jun Li, Kui Cai

The error correcting performance of multi-level-cell (MLC) NAND flash memory is closely related to the block length of error correcting codes (ECCs) and log-likelihood-ratios (LLRs) of the read-voltage thresholds.
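The LLR computation the abstract ties to read-voltage thresholds can be illustrated under an assumed Gaussian threshold-voltage model for the two cell states; this is a textbook LLR, not the paper's DNN-aided optimization.

```python
def llr_from_gaussians(v, mu0, mu1, sigma):
    """Log-likelihood ratio log p(v | bit=0) - log p(v | bit=1).

    Assumes the threshold voltage of each state is Gaussian with means
    `mu0`, `mu1` and a common standard deviation `sigma` (an assumed
    channel model). Shared normalization terms cancel, leaving only the
    quadratic exponents.
    """
    return ((v - mu1) ** 2 - (v - mu0) ** 2) / (2.0 * sigma ** 2)
```

A positive LLR means the read voltage is more consistent with bit 0; soft-decision ECC decoders consume these values directly, which is why the placement of read-voltage thresholds matters.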

User-Level Privacy-Preserving Federated Learning: Analysis and Performance Optimization

no code implementations29 Feb 2020 Kang Wei, Jun Li, Ming Ding, Chuan Ma, Hang Su, Bo Zhang, H. Vincent Poor

According to our analysis, the UDP framework can realize $(\epsilon_{i}, \delta_{i})$-LDP for the $i$-th MT with adjustable privacy protection levels by varying the variances of the artificial noise processes.

Federated Learning Privacy Preserving
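The idea of tuning per-client protection levels by varying the artificial-noise variance can be illustrated with the classical Gaussian-mechanism calibration; this standard bound (valid for epsilon <= 1) is a stand-in, not the paper's exact UDP analysis.

```python
import math

def gaussian_sigma(epsilon, delta, sensitivity=1.0):
    """Noise standard deviation for the classical Gaussian mechanism.

    sigma >= sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon yields
    (epsilon, delta)-DP for epsilon <= 1. Smaller epsilon (stronger
    privacy) requires proportionally larger noise.
    """
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon
```

Giving each MT its own `(epsilon_i, delta_i)` thus amounts to assigning it its own noise variance, which is the adjustable-protection idea the snippet describes.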

Federated Learning with Differential Privacy: Algorithms and Performance Analysis

no code implementations1 Nov 2019 Kang Wei, Jun Li, Ming Ding, Chuan Ma, Howard H. Yang, Farhad Farokhi, Shi Jin, Tony Q. S. Quek, H. Vincent Poor

Specifically, the theoretical bound reveals three key properties: 1) there is a tradeoff between convergence performance and privacy protection level, i.e., better convergence performance entails a lower protection level; 2) given a fixed privacy protection level, increasing the number $N$ of clients participating in FL improves convergence performance; 3) there is an optimal number of maximum aggregation times (communication rounds) in terms of convergence performance for a given protection level.

Federated Learning Privacy Preserving +1
