Federated Learning

1250 papers with code • 12 benchmarks • 11 datasets

Federated Learning is a machine learning approach that allows multiple devices or entities to collaboratively train a shared model without exchanging their data with each other. Instead of sending data to a central server for training, the model is trained locally on each device, and only the model updates are sent to the central server, where they are aggregated to improve the shared model.

This approach allows for privacy-preserving machine learning, as each device keeps its data locally and only shares the information needed to improve the model.
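The sketch below makes this workflow concrete with a minimal FedAvg-style loop in NumPy: each client runs a few local gradient steps on its own data, and the server only receives and averages model weights. All names here (local_step, fed_avg, client_data) are illustrative assumptions, not part of any particular federated learning framework.

# Minimal FedAvg-style sketch (illustrative only; not tied to any library).
import numpy as np

def local_step(weights, X, y, lr=0.1, epochs=1):
    """One client's local training: a few epochs of least-squares gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def fed_avg(global_w, client_data, rounds=10):
    """Server loop: broadcast the model, collect client models, average them."""
    for _ in range(rounds):
        sizes, updates = [], []
        for X, y in client_data:            # raw data never leaves the client
            updates.append(local_step(global_w, X, y))
            sizes.append(len(y))
        # Weighted average of client models, proportional to local data size
        global_w = np.average(updates, axis=0, weights=sizes)
    return global_w

# Toy usage: three clients with small synthetic linear-regression datasets
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))
print(fed_avg(np.zeros(2), clients, rounds=20))   # prints an estimate close to true_w

Real systems add client sampling, secure aggregation, and compression on top of this basic loop, but the privacy argument above rests on exactly this structure: only weights, never data, cross the network.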


Latest papers with no code

FedGreen: Carbon-aware Federated Learning with Model Size Adaptation

no code yet • 23 Apr 2024

Federated learning (FL) provides a promising collaborative framework to build a model from distributed clients, and this work investigates the carbon emission of the FL process.

FL-TAC: Enhanced Fine-Tuning in Federated Learning via Low-Rank, Task-Specific Adapter Clustering

no code yet • 23 Apr 2024

Although large-scale pre-trained models hold great potential for adapting to downstream tasks through fine-tuning, the performance of such fine-tuned models is often limited by the difficulty of collecting sufficient high-quality, task-specific data.
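As background on the low-rank, task-specific adapters named in the title, the sketch below shows a generic LoRA-style adapter wrapped around a frozen linear layer in PyTorch. It is an illustrative assumption about how such adapters are commonly structured, not the FL-TAC implementation; in a federated setting, only the small trainable factors A and B would need to be communicated.

# Generic low-rank (LoRA-style) adapter around a frozen linear layer.
# Illustrative only; this is not the FL-TAC code.
import torch
import torch.nn as nn

class LowRankAdapterLinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():    # pre-trained weights stay frozen
            p.requires_grad = False
        # Trainable low-rank factors: the only parameters that get updated
        self.A = nn.Parameter(torch.zeros(rank, base.in_features))
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        nn.init.normal_(self.A, std=0.01)   # B starts at zero, so the adapter is a no-op initially
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LowRankAdapterLinear(nn.Linear(128, 64), rank=4)
out = layer(torch.randn(2, 128))            # adapter adds a rank-4 correction
print(out.shape, sum(p.numel() for p in layer.parameters() if p.requires_grad))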

Advances and Open Challenges in Federated Learning with Foundation Models

no code yet • 23 Apr 2024

The integration of Foundation Models (FMs) with Federated Learning (FL) presents a transformative paradigm in Artificial Intelligence (AI), offering enhanced capabilities while addressing concerns of privacy, data decentralization, and computational efficiency.

FedTAD: Topology-aware Data-free Knowledge Distillation for Subgraph Federated Learning

no code yet • 22 Apr 2024

Subgraph federated learning (subgraph-FL) is a new distributed paradigm that facilitates the collaborative training of graph neural networks (GNNs) across subgraphs held by multiple clients.

Fair Concurrent Training of Multiple Models in Federated Learning

no code yet • 22 Apr 2024

We show how our fairness-based learning and incentive mechanisms impact training convergence, and we evaluate our algorithm with multiple sets of learning tasks on real-world datasets.

Machine Learning Techniques for MRI Data Processing at Expanding Scale

no code yet • 22 Apr 2024

Imaging sites around the world generate growing amounts of medical scan data with ever more versatile and affordable technology.

Poisoning Attacks on Federated Learning-based Wireless Traffic Prediction

no code yet • 22 Apr 2024

Federated Learning (FL) offers a distributed framework to train a global control model across multiple base stations without compromising the privacy of their local network data.

Apodotiko: Enabling Efficient Serverless Federated Learning in Heterogeneous Environments

no code yet • 22 Apr 2024

Federated Learning (FL) is an emerging machine learning paradigm that enables the collaborative training of a shared global model across distributed clients while keeping the data decentralized.

Dual Model Replacement: Invisible Multi-target Backdoor Attack based on Federated Learning

no code yet • 22 Apr 2024

Considering the characteristics of trigger generation, data poisoning, and model training in backdoor attacks, this paper designs a backdoor attack method for the federated learning setting.

FedMPQ: Secure and Communication-Efficient Federated Learning with Multi-codebook Product Quantization

no code yet • 21 Apr 2024

In federated learning, particularly in cross-device scenarios, secure aggregation has recently gained popularity as it effectively defends against inference attacks by malicious aggregators.
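As general background on secure aggregation (an illustrative sketch, not FedMPQ's protocol): each pair of clients can derive a shared random mask that one client adds and the other subtracts, so the masks cancel in the server-side sum and the aggregator only ever sees masked individual updates.

# Toy pairwise-masking secure aggregation (illustrative, not FedMPQ's protocol).
import numpy as np

def masked_update(i, updates, seeds, dim):
    """Return client i's update with all pairwise masks applied."""
    masked = updates[i].copy()
    for j in range(len(updates)):
        if j == i:
            continue
        # Shared seed for the pair (i, j); in practice agreed via key exchange
        rng = np.random.default_rng(seeds[min(i, j), max(i, j)])
        mask = rng.normal(size=dim)
        masked += mask if i < j else -mask   # one side adds, the other subtracts
    return masked

n, dim = 4, 5
rng = np.random.default_rng(1)
updates = [rng.normal(size=dim) for _ in range(n)]
seeds = rng.integers(0, 2**31, size=(n, n))        # symmetric pairwise seeds
masked = [masked_update(i, updates, seeds, dim) for i in range(n)]

# The server only sees masked vectors, yet their sum equals the true sum.
print(np.allclose(sum(masked), sum(updates)))       # True

Production protocols add secret sharing to tolerate client dropouts and combine this with compression schemes (such as the product quantization studied in FedMPQ) to keep communication costs low.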