1 code implementation • 9 Feb 2024 • Gongxi Zhu, Donghao Li, Hanlin Gu, Yuxing Han, Yuan YAO, Lixin Fan, Qiang Yang
Firstly, combining model information from multiple communication rounds (Multi-temporal) enhances the overall effectiveness of MIAs compared to utilizing model information from a single epoch.
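The multi-round signal can be illustrated with a toy membership-inference sketch in plain numpy (this is an illustration of the general idea, not the paper's attack; the model, sizes, and learning rate are our own assumptions): an overparameterised logistic model gradually overfits its training "members", so averaging each sample's loss across all communication rounds separates members from never-seen non-members.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, rounds, lr = 20, 50, 300, 0.5

# Overparameterised logistic model trained on n "member" points; a
# separate pool of "non-members" is never seen during training.
X_mem = rng.standard_normal((n, d)); y_mem = rng.choice([-1.0, 1.0], n)
X_non = rng.standard_normal((n, d)); y_non = rng.choice([-1.0, 1.0], n)

def losses(w, X, y):
    return np.log1p(np.exp(-y * (X @ w)))   # per-sample logistic loss

w = np.zeros(d)
mem_traj, non_traj = [], []
for _ in range(rounds):                      # one "communication round" each
    margins = y_mem * (X_mem @ w)
    grad = -(X_mem * (y_mem / (1 + np.exp(margins)))[:, None]).mean(0)
    w -= lr * grad
    mem_traj.append(losses(w, X_mem, y_mem))
    non_traj.append(losses(w, X_non, y_non))

# Multi-round MIA score: average each sample's loss across ALL rounds.
mem_score = np.mean(mem_traj, axis=0)
non_score = np.mean(non_traj, axis=0)
print(mem_score.mean(), non_score.mean())    # members overfit -> lower loss
```

Thresholding this multi-round score gives the attacker a membership test; a single late-round snapshot would discard the loss trajectory that carries much of the signal.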
no code implementations • 27 Dec 2023 • Hanlin Gu, Xinyuan Zhao, Gongxi Zhu, Yuxing Han, Yan Kang, Lixin Fan, Qiang Yang
Concerns about utility, privacy, and training efficiency in FL have garnered significant research attention.
no code implementations • 29 Nov 2023 • Yan Kang, Tao Fan, Hanlin Gu, Xiaojin Zhang, Lixin Fan, Qiang Yang
Motivated by the strong growth in FTL-FM research and the potential impact of FTL-FM on industrial applications, we propose an FTL-FM framework that formulates problems of grounding FMs in the federated learning setting, construct a detailed taxonomy based on the FTL-FM framework to categorize state-of-the-art FTL-FM works, and comprehensively overview FTL-FM works based on the proposed taxonomy.
no code implementations • 24 Oct 2023 • Yuanfeng Song, Yuanqin He, Xuefang Zhao, Hanlin Gu, Di Jiang, Haijun Yang, Lixin Fan, Qiang Yang
The springing up of Large Language Models (LLMs) has shifted the community from single-task-oriented natural language processing (NLP) research to a holistic end-to-end multi-task learning paradigm.
no code implementations • 13 Jun 2023 • Bowen Li, Hanlin Gu, Ruoxin Chen, Jie Li, Chentao Wu, Na Ruan, Xueming Si, Lixin Fan
We investigate a Temporal Gradient Inversion Attack with a Robust Optimization framework, called TGIAs-RO, which recovers private data without any prior knowledge by leveraging multiple temporal gradients.
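A minimal closed-form illustration of gradient inversion on a linear model (this toy is not TGIAs-RO, which combines multiple temporal gradients under a robust optimization framework; the single-round setting and known-label assumption are ours):

```python
import numpy as np

# Toy gradient inversion on linear regression: the client computes the
# gradient of L = 0.5 * (w @ x - y)**2, i.e. g = (w @ x - y) * x, and
# shares g. An attacker who knows the model w and the label y can invert
# g in closed form: with r = w @ x - y we have w @ g = r * (r + y), so r
# solves r**2 + y*r - w @ g = 0 and then x = g / r.

rng = np.random.default_rng(0)
d = 5
w = rng.standard_normal(d)      # shared model weights (known to the attacker)
x = rng.standard_normal(d)      # client's private feature vector
y = 1.0                         # label, assumed known to the attacker

g = (w @ x - y) * x             # the gradient the attacker observes

disc = y * y + 4 * (w @ g)      # discriminant equals (y + 2r)**2 >= 0
roots = [(-y + np.sqrt(disc)) / 2, (-y - np.sqrt(disc)) / 2]
candidates = [g / r for r in roots if abs(r) > 1e-12]
# Both candidates are parallel to x (the direction always leaks); picking
# the closer one here uses knowledge of x purely to score the demo.
x_rec = min(candidates, key=lambda c: np.linalg.norm(c - x))

print(np.linalg.norm(x_rec - x))   # reconstruction error, ~0 up to float precision
```

For richer models no such closed form exists, which is why practical attacks instead optimize dummy inputs to match observed gradients; accumulating gradients over rounds, as the paper's temporal setting does, gives the attacker more such constraints.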
no code implementations • 10 May 2023 • Wenyuan Yang, Gongxi Zhu, Yuguo Yin, Hanlin Gu, Lixin Fan, Qiang Yang, Xiaochun Cao
Federated learning allows multiple parties to collaborate in learning a global model without revealing private data.
no code implementations • 9 May 2023 • Sheng Wan, Dashan Gao, Hanlin Gu, Daning Hu
However, in reality, the number of overlapped users is often very small, thus largely limiting the performance of such approaches.
no code implementations • 8 May 2023 • Wenyuan Yang, Yuguo Yin, Gongxi Zhu, Hanlin Gu, Lixin Fan, Xiaochun Cao, Qiang Yang
Federated learning (FL) allows multiple parties to cooperatively learn a federated model without sharing private data with each other.
no code implementations • 29 Apr 2023 • Yan Kang, Hanlin Gu, Xingxing Tang, Yuanqin He, Yuzhu Zhang, Jinnan He, Yuxing Han, Lixin Fan, Kai Chen, Qiang Yang
Different from existing CMOFL works focusing on utility, efficiency, fairness, and robustness, we consider optimizing privacy leakage along with utility loss and training cost, the three primary objectives of a trustworthy federated learning (TFL) system.
no code implementations • 30 Jan 2023 • Hanlin Gu, Jiahuan Luo, Yan Kang, Lixin Fan, Qiang Yang
Vertical federated learning (VFL) allows an active party with labeled features to leverage auxiliary features from the passive parties to improve model performance.
no code implementations • 24 Nov 2022 • Hanlin Gu, Lixin Fan, Xingxing Tang, Qiang Yang
Extensive experimental results under a variety of settings justify the superiority of FedCut, which demonstrates extremely robust model performance (MP) under various attacks.
no code implementations • 14 Nov 2022 • Shuo Shao, Wenyuan Yang, Hanlin Gu, Zhan Qin, Lixin Fan, Qiang Yang, Kui Ren
To deter such misbehavior, it is essential to establish a mechanism for verifying the ownership of the model as well as tracing its origin to the leaker among the FL participants.
no code implementations • 11 Mar 2022 • Xiaojin Zhang, Hanlin Gu, Lixin Fan, Kai Chen, Qiang Yang
In a federated learning scenario where multiple parties jointly learn a model from their respective data, there exist two conflicting goals for the choice of appropriate algorithms.
1 code implementation • 27 Sep 2021 • Bowen Li, Lixin Fan, Hanlin Gu, Jie Li, Qiang Yang
To address these risks, ownership verification of federated learning models is a prerequisite for protecting federated learning model intellectual property rights (IPR), i.e., FedIPR.
no code implementations • 27 Sep 2021 • Hanlin Gu, Lixin Fan, Bowen Li, Yan Kang, Yuan YAO, Qiang Yang
To address the aforementioned perplexity, we propose a novel Bayesian Privacy (BP) framework which enables Bayesian restoration attacks to be formulated as the probability of reconstructing private data from observed public information.
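The core quantity here, the probability of restoring private data from a released observation, can be illustrated with a generic Bayesian toy example (this is not the paper's BP framework; the Gaussian noise channel and all parameters are our assumptions):

```python
import numpy as np

# A private bit x in {0, 1} (uniform prior) is released as o = x + noise,
# with Gaussian noise of scale sigma. The attacker's best guess is the MAP
# estimate under the posterior P(x | o); its empirical success rate
# quantifies the restoration risk, shrinking toward 1/2 (random guessing)
# as the noise grows.

def restoration_prob(sigma, trials=100_000, seed=2):
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, trials)              # private bits
    o = x + sigma * rng.standard_normal(trials)  # observed public release
    x_map = (o > 0.5).astype(int)                # MAP rule for uniform prior
    return (x_map == x).mean()

# Low noise: near-certain restoration. High noise: near-random guessing.
print(restoration_prob(0.1), restoration_prob(5.0))
```

The trade-off the paper formalizes is visible even here: the same noise that pushes the restoration probability toward chance also degrades whatever utility the released value carries.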
1 code implementation • 17 Aug 2020 • Hanlin Gu, Yin Xian, Ilona Christy Unarta, Yuan YAO
Equipped with robust $\ell_1$ Autoencoder and some designs of robust $\beta$-GANs, one can stabilize the training of GANs and achieve the state-of-the-art performance of robust denoising with low SNR data and against possible information contamination.
1 code implementation • 20 Oct 2018 • Yin Xian, Hanlin Gu, Wei Wang, Xuhui Huang, Yuan YAO, Yang Wang, Jian-Feng Cai
We introduce the use of data-driven tight frame (DDTF) algorithm for cryo-EM image denoising.
Computation • Image and Video Processing