Federated Learning
1269 papers with code • 12 benchmarks • 11 datasets
Federated Learning is a machine learning approach that allows multiple devices or entities to collaboratively train a shared model without exchanging their data with each other. Instead of sending data to a central server for training, the model is trained locally on each device, and only the model updates are sent to the central server, where they are aggregated to improve the shared model.
This approach allows for privacy-preserving machine learning, as each device keeps its data locally and only shares the information needed to improve the model.
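The train-locally-then-aggregate loop described above can be sketched in a few lines. This is a minimal, hypothetical simulation of federated averaging (FedAvg-style, size-weighted aggregation) using a plain linear model as a stand-in for an arbitrary network; the function names and hyperparameters are illustrative, not from any particular library.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on its
    private data. A linear least-squares model stands in for an
    arbitrary model; the raw data never leaves this function."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def federated_averaging(global_w, client_data):
    """Server round: each client trains locally, then the server
    aggregates the returned weights, weighted by dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, float))

# Two clients with private data; only model weights reach the server.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 150):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    w = federated_averaging(w, clients)
```

After a handful of rounds the shared model `w` recovers the underlying weights even though neither client's data was ever pooled centrally; real systems add secure aggregation, client sampling, and differential privacy on top of this skeleton.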
Libraries
Use these libraries to find Federated Learning models and implementations.
Datasets
Latest papers
Confidential Federated Computations
Federated Learning and Analytics (FLA) have seen widespread adoption by technology platforms for processing sensitive on-device data.
Personalized Federated Learning via Stacking
Traditional Federated Learning (FL) methods typically train a single global model collaboratively without exchanging raw data.
SpamDam: Towards Privacy-Preserving and Adversary-Resistant SMS Spam Detection
In this study, we introduce SpamDam, an SMS spam detection framework designed to overcome key challenges in detecting and understanding SMS spam, such as the lack of public SMS spam datasets, increasing privacy concerns around collecting SMS data, and the need for adversary-resistant detection models.
FLEX: FLEXible Federated Learning Framework
In the realm of Artificial Intelligence (AI), the need for privacy and security in data processing has become paramount.
pfl-research: simulation framework for accelerating research in Private Federated Learning
Federated learning (FL) is an emerging machine learning (ML) training paradigm where clients own their data and collaborate to train a global model, without revealing any data to the server and other participants.
Aggressive or Imperceptible, or Both: Network Pruning Assisted Hybrid Byzantines in Federated Learning
Hence, inspired by sparse neural networks, we introduce a hybrid sparse Byzantine attack composed of two parts: one exhibiting a sparse nature, attacking only certain NN locations with higher sensitivity, and the other more silent but accumulating over time. Each part ideally targets a different type of defence mechanism, and together they form a strong but imperceptible attack.
Approximate Gradient Coding for Privacy-Flexible Federated Learning with Non-IID Data
This work focuses on the challenges of non-IID data and stragglers/dropouts in federated learning.
Open-Vocabulary Federated Learning with Multimodal Prototyping
A new user could come up with queries that involve data from unseen classes, and such open-vocabulary queries would directly defeat such FL systems.
Computation and Communication Efficient Lightweighting Vertical Federated Learning
Moreover, we establish a convergence bound for our LVFL algorithm, which accounts for both communication and computational lightweighting ratios.
Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.