LW-FedSSL: Resource-efficient Layer-wise Federated Self-supervised Learning

22 Jan 2024 · Ye Lin Tun, Chu Myaet Thwal, Le Quang Huy, Minh N. H. Nguyen, Choong Seon Hong

Many studies integrate federated learning (FL) with self-supervised learning (SSL) to take advantage of raw training data distributed across edge devices. However, edge devices often struggle with the high computation and communication costs imposed by SSL and FL algorithms. To tackle this challenge, we propose LW-FedSSL, a layer-wise federated self-supervised learning approach that allows edge devices to incrementally train a single layer of the model at a time. LW-FedSSL comprises server-side calibration and representation alignment mechanisms that maintain performance comparable to end-to-end federated self-supervised learning (FedSSL) while significantly lowering clients' resource requirements. In a purely layer-wise training scheme, training one layer at a time can limit effective interaction between different layers of the model. The server-side calibration mechanism takes advantage of the resource-rich server in an FL environment to ensure smooth collaboration between different layers of the global model. During local training, the representation alignment mechanism encourages closeness between the representations of FL local models and those of the global model, thereby preserving the layer cohesion established by server-side calibration. Our experiments show that LW-FedSSL has a $3.3 \times$ lower memory requirement and a $3.2 \times$ cheaper communication cost than its end-to-end counterpart. We also explore a progressive training strategy called Prog-FedSSL, which outperforms end-to-end training with a similar memory requirement and a $1.8 \times$ cheaper communication cost.
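To make the layer-wise training and representation alignment ideas concrete, below is a minimal PyTorch-style sketch of one client's update in a given stage. It is an illustration under stated assumptions, not the paper's implementation: the names `build_encoder`, `forward_blocks`, `local_layerwise_update`, `ssl_loss_fn`, and `align_weight` are hypothetical, and the cosine-similarity alignment term is an assumption, since the abstract does not specify the exact SSL objective or alignment loss.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def build_encoder(num_blocks=4, dim=128):
    """Toy stack of blocks standing in for the encoder layers (hypothetical)."""
    return nn.ModuleList(
        [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_blocks)]
    )

def forward_blocks(blocks, x, upto):
    """Run the input through blocks 0..upto (inclusive)."""
    for block in blocks[: upto + 1]:
        x = block(x)
    return x

def local_layerwise_update(global_blocks, data_loader, stage, ssl_loss_fn,
                           align_weight=1.0, lr=1e-3, local_epochs=1):
    """One client's update in stage `stage`: only block `stage` is trained,
    with an alignment term pulling its output toward the frozen global model."""
    local_blocks = copy.deepcopy(global_blocks)

    # Freeze everything except the layer assigned to the current stage.
    for i, block in enumerate(local_blocks):
        for p in block.parameters():
            p.requires_grad = (i == stage)

    optimizer = torch.optim.SGD(local_blocks[stage].parameters(), lr=lr)

    for _ in range(local_epochs):
        for x in data_loader:
            z_local = forward_blocks(local_blocks, x, stage)
            with torch.no_grad():
                z_global = forward_blocks(global_blocks, x, stage)

            # Whatever self-supervised objective the client runs on its
            # representations (e.g., a contrastive loss on augmented views);
            # left abstract here because the paper's loss is not given above.
            ssl_loss = ssl_loss_fn(z_local)

            # Representation alignment (assumed cosine-based): keep local
            # representations close to those of the global model.
            align_loss = 1.0 - F.cosine_similarity(z_local, z_global, dim=-1).mean()

            loss = ssl_loss + align_weight * align_loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    # Only the actively trained layer needs to be sent back to the server,
    # which is where the communication savings come from.
    return {k: v.detach() for k, v in local_blocks[stage].state_dict().items()}
```

In a full round, each client would return only the parameters of the active layer for aggregation (e.g., FedAvg), and the server-side calibration step described above, which uses the resource-rich server to keep the assembled layers working well together, is not sketched here.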
