Federated Learning

1217 papers with code • 12 benchmarks • 11 datasets

Federated Learning is a machine learning approach that allows multiple devices or entities to collaboratively train a shared model without exchanging their data with each other. Instead of sending data to a central server for training, the model is trained locally on each device, and only the model updates are sent to the central server, where they are aggregated to improve the shared model.

This approach allows for privacy-preserving machine learning, as each device keeps its data locally and only shares the information needed to improve the model.
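For concreteness, the round described above can be sketched as a minimal federated averaging (FedAvg-style) loop. Everything below is an illustrative assumption rather than a reference implementation: three simulated clients, a linear least-squares model, and a uniform server average.

```python
# Minimal FedAvg-style sketch in NumPy. Clients, model, and hyperparameters
# are illustrative assumptions, not tied to any paper listed on this page.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train locally with gradient descent and return only the updated weights."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient on local data
        w -= lr * grad
    return w

# Simulated private datasets held by three clients (never shared with the server).
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

global_w = np.zeros(3)
for round_ in range(10):
    # Each client trains on its own data and sends back only the model weights.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # The server aggregates the updates -- here a plain (uniform) average.
    global_w = np.mean(local_ws, axis=0)

print("global model after 10 rounds:", global_w)
```

In practice the server usually weights the average by each client's dataset size, since clients rarely hold equal amounts of data.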

Latest papers with no code

Spikewhisper: Temporal Spike Backdoor Attacks on Federated Neuromorphic Learning over Low-power Devices

no code yet • 27 Mar 2024

Federated neuromorphic learning (FedNL) leverages event-driven spiking neural networks and federated learning frameworks to effectively execute intelligent analysis tasks across large numbers of distributed low-power devices, but it also remains vulnerable to poisoning attacks.

CoRAST: Towards Foundation Model-Powered Correlated Data Analysis in Resource-Constrained CPS and IoT

no code yet • 27 Mar 2024

Foundation models (FMs) are emerging as a promising way to harness distributed and diverse environmental data, leveraging prior knowledge to capture the complex temporal and spatial correlations within heterogeneous datasets.

FRESCO: Federated Reinforcement Energy System for Cooperative Optimization

no code yet • 27 Mar 2024

The rise of renewable energy is creating new dynamics in the energy grid that promise a cleaner and more participative system, with technology playing a crucial part in providing the flexibility required to achieve the vision of the next-generation grid.

Generalized Policy Learning for Smart Grids: FL TRPO Approach

no code yet • 27 Mar 2024

The smart grid domain requires bolstering the capabilities of existing energy management systems. Federated Learning (FL) aligns with this goal: it can train models on heterogeneous datasets while preserving data privacy, which suits smart grid applications, where disparate data distributions and interdependencies among features make linear models a poor fit.

Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates

no code yet • 27 Mar 2024

Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.

GPFL: A Gradient Projection-Based Client Selection Framework for Efficient Federated Learning

no code yet • 26 Mar 2024

Client selection in federated learning determines which clients participate in each training round and must balance model accuracy against communication efficiency.
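The excerpt does not describe GPFL's gradient-projection criterion, so the sketch below only shows the generic pattern such methods build on: the server samples a fraction of clients each round, here with probability proportional to assumed local dataset sizes.

```python
# Generic client-selection sketch (NOT GPFL's gradient-projection method, which
# is not detailed in the excerpt). Dataset sizes and the sampling fraction are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def select_clients(num_clients, data_sizes, fraction=0.2):
    """Pick this round's participants with probability proportional to data size."""
    k = max(1, int(fraction * num_clients))
    probs = np.asarray(data_sizes, dtype=float)
    probs /= probs.sum()
    return rng.choice(num_clients, size=k, replace=False, p=probs)

data_sizes = [120, 40, 300, 75, 510, 60, 90, 220, 35, 150]   # assumed per-client sizes
for round_ in range(3):
    chosen = select_clients(len(data_sizes), data_sizes)
    print(f"round {round_}: clients {sorted(chosen.tolist())}")
```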

Secure Aggregation is Not Private Against Membership Inference Attacks

no code yet • 26 Mar 2024

In this paper, we delve into the privacy implications of SecAgg by treating it as a local differential privacy (LDP) mechanism for each local update.
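As background for the SecAgg discussion, the toy sketch below illustrates the pairwise-masking idea at the heart of secure aggregation: paired clients add and subtract a shared random mask, so the server only ever sees masked updates while their sum is preserved. It deliberately omits key agreement, dropout handling, and the finite-field arithmetic of the real protocol; all vectors are illustrative.

```python
# Toy pairwise-masking sketch of secure aggregation (SecAgg). Illustrative only:
# real SecAgg derives masks from key agreement and works over a finite field.
import numpy as np

rng = np.random.default_rng(2)
n_clients, dim = 4, 5
updates = rng.normal(size=(n_clients, dim))      # private local updates

masked = updates.copy()
for i in range(n_clients):
    for j in range(i + 1, n_clients):
        mask = rng.normal(size=dim)              # mask shared by clients i and j
        masked[i] += mask                        # client i adds the mask
        masked[j] -= mask                        # client j subtracts the mask

# The server only sees `masked`, yet the aggregate is exactly the true sum.
assert np.allclose(masked.sum(axis=0), updates.sum(axis=0))
print("aggregate recovered:", masked.sum(axis=0))
```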

Enhancing Privacy in Federated Learning through Local Training

no code yet • 26 Mar 2024

In this paper, we propose the federated private local training algorithm (Fed-PLT) for federated learning, to overcome the challenges of (i) expensive communication and (ii) privacy preservation.

Not All Federated Learning Algorithms Are Created Equal: A Performance Evaluation Study

no code yet • 26 Mar 2024

However, algorithms such as FedDyn and SCAFFOLD are more prone to catastrophic failures without the support of additional techniques such as gradient clipping.
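Gradient clipping, named here as the stabilizer that keeps methods like FedDyn and SCAFFOLD from failing catastrophically, can be sketched as norm-based clipping of client updates before aggregation. The threshold and the simulated updates below are illustrative assumptions.

```python
# Norm-based clipping of client updates before server-side averaging.
# Threshold and update values are illustrative assumptions.
import numpy as np

def clip_update(update, max_norm=1.0):
    """Scale the update down if its L2 norm exceeds max_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, max_norm / norm) if norm > 0 else update

rng = np.random.default_rng(3)
client_updates = [rng.normal(scale=s, size=4) for s in (0.1, 0.5, 10.0)]

clipped = [clip_update(u, max_norm=1.0) for u in client_updates]
aggregated = np.mean(clipped, axis=0)   # a diverging client can no longer dominate
print("aggregated update:", aggregated)
```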

Leak and Learn: An Attacker's Cookbook to Train Using Leaked Data from Federated Learning

no code yet • 26 Mar 2024

We demonstrate the effectiveness of both GI and LLL attacks: models trained maliciously on the leaked data achieve higher accuracy than a benign federated learning strategy.
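The paper's GI and LLL attacks are not detailed in the excerpt; as a rough illustration of why leaked gradients matter, the sketch below follows the classic "deep leakage from gradients" recipe in PyTorch, optimizing a dummy input so that its gradient matches a leaked one. The model, data, and the assumption that the true label is known are all illustrative.

```python
# Gradient-inversion sketch in the style of "deep leakage from gradients".
# Illustrative assumptions: tiny linear model, single leaked example, known label.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(8, 3)
loss_fn = torch.nn.CrossEntropyLoss()

# "Leaked" gradient from one client step on private data.
x_true = torch.randn(1, 8)
y_true = torch.tensor([1])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true), model.parameters())

# The attacker optimizes dummy data so its gradient matches the leaked gradient.
x_dummy = torch.randn(1, 8, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy])

def closure():
    opt.zero_grad()
    dummy_loss = loss_fn(model(x_dummy), y_true)          # label assumed known
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()                                  # gradient w.r.t. x_dummy
    return grad_diff

for _ in range(30):
    opt.step(closure)

print("reconstruction error:", torch.norm(x_dummy.detach() - x_true).item())
```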