no code implementations • 6 Feb 2024 • Xiaoxin Su, Yipeng Zhou, Laizhong Cui, Song Guo
Recently, federated learning (FL) has gained momentum because of its ability to preserve data privacy.
no code implementations • 6 Feb 2024 • Xiaoxin Su, Yipeng Zhou, Laizhong Cui, John C. S. Lui, Jiangchuan Liu
In the Federated Learning (FL) paradigm, a parameter server (PS) communicates concurrently with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds, without touching private data owned by individual clients.
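A minimal sketch of one such communication round, assuming simple FedAvg-style weighted averaging; the function names, the least-squares local objective, and the synthetic clients are illustrative, not the paper's API:

```python
import numpy as np

def local_update(global_weights, client_data, lr=0.01):
    """Illustrative local step: one gradient-descent pass on a
    least-squares objective, standing in for private client training."""
    X, y = client_data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def fl_round(global_weights, clients):
    """One FL round: the PS distributes the model, collects client
    updates, and aggregates them; raw data never leaves a client."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_weights, (X, y)))
        sizes.append(len(y))
    # Aggregation weighted by local dataset size (FedAvg-style).
    return np.average(np.stack(updates), axis=0, weights=np.asarray(sizes, float))

rng = np.random.default_rng(0)
w = np.zeros(5)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
for _ in range(10):          # multiple rounds of collect/aggregate/distribute
    w = fl_round(w, clients)
```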
no code implementations • 12 Aug 2022 • Laizhong Cui, Xiaoxin Su, Yipeng Zhou
Recently, blockchain-based federated learning (BFL) has attracted intensive research attention because the training process is auditable and the serverless architecture avoids the single point of failure of the parameter server in vanilla federated learning (VFL).
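A toy sketch of the serverless pattern the abstract describes, with a hash-chained append-only log standing in for the blockchain; the `Ledger` class and all names here are assumptions for illustration, not the paper's design:

```python
import hashlib, json
import numpy as np

class Ledger:
    """Toy append-only, hash-chained log standing in for a blockchain:
    recorded updates are auditable and no central server is required."""
    def __init__(self):
        self.blocks = []

    def append(self, payload):
        prev = self.blocks[-1]["hash"] if self.blocks else "genesis"
        body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
        self.blocks.append({"prev": prev, "payload": payload,
                            "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(self):
        # Auditability: recompute the hash chain to detect tampering.
        prev = "genesis"
        for b in self.blocks:
            body = json.dumps({"prev": prev, "payload": b["payload"]},
                              sort_keys=True)
            if hashlib.sha256(body.encode()).hexdigest() != b["hash"]:
                return False
            prev = b["hash"]
        return True

ledger = Ledger()
for client_id in range(3):               # clients post updates on-chain
    update = np.random.default_rng(client_id).normal(size=4)
    ledger.append({"client": client_id, "update": update.tolist()})
# Any node can aggregate from the ledger; no single point of failure.
agg = np.mean([b["payload"]["update"] for b in ledger.blocks], axis=0)
assert ledger.verify()
```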
no code implementations • 13 Dec 2021 • Laizhong Cui, Xiaoxin Su, Yipeng Zhou, Jiangchuan Liu
Federated Learning (FL) incurs high communication overhead, which can be greatly alleviated by compressing model updates.
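To make the overhead-versus-compression trade-off concrete, here is a generic top-k sparsification sketch; this is a standard illustrative scheme, not the compression method proposed in the paper:

```python
import numpy as np

def topk_compress(update, k):
    """Keep only the k largest-magnitude entries of the model update,
    so only k (index, value) pairs are transmitted instead of the full vector."""
    idx = np.argsort(np.abs(update))[-k:]
    return idx, update[idx]

def topk_decompress(idx, vals, dim):
    out = np.zeros(dim)
    out[idx] = vals
    return out

rng = np.random.default_rng(1)
u = rng.normal(size=1_000_000)
idx, vals = topk_compress(u, k=10_000)     # ~100x fewer floats sent upstream
u_hat = topk_decompress(idx, vals, u.size)
```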
no code implementations • 10 May 2021 • Laizhong Cui, Xiaoxin Su, Yipeng Zhou, Yi Pan
We further propose the boosted MUCSC (B-MUCSC) algorithm, a biased compression algorithm that achieves an extremely high compression rate by grouping insignificant model updates into a super cluster.
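A hedged sketch of the idea stated in the abstract, not the authors' exact algorithm: small-magnitude (insignificant) entries are grouped into one "super cluster" encoded by a single shared value, while the remaining entries are clustered and each replaced by its cluster centroid. The threshold `tau`, the cluster count `k`, and the tiny 1-D k-means are all assumptions for illustration:

```python
import numpy as np

def super_cluster_compress(update, tau=0.01, k=4, iters=10):
    """Biased cluster-based compression sketch: insignificant entries
    (|v| < tau) share one super-cluster value (0.0 here); significant
    entries are quantized to k centroids, so only the centroids and
    per-entry cluster labels need to be transmitted."""
    small = np.abs(update) < tau            # the super cluster
    sig = update[~small]
    # Tiny 1-D k-means over the significant values.
    centroids = np.quantile(sig, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        assign = np.abs(sig[:, None] - centroids[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = sig[assign == j].mean()
    compressed = np.zeros_like(update)      # super cluster -> shared value 0.0
    compressed[~small] = centroids[assign]
    return compressed

rng = np.random.default_rng(2)
u = rng.normal(scale=0.02, size=10_000)
u_hat = super_cluster_compress(u)
```

The compression is biased because super-cluster entries are not reconstructed exactly, which is the price paid for the very high compression rate the abstract describes.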