no code implementations • 17 Jun 2023 • Shitian Li, Chunlin Tian, Kahou Tam, Rui Ma, Li Li
In this systematic survey, we explore state-of-the-art techniques for breaking the memory wall in on-device training, focusing on methods that enable larger and more complex models to be trained on resource-constrained devices.
1 code implementation • 24 Jun 2021 • Kahou Tam, Li Li, Bo Han, Chengzhong Xu, Huazhu Fu
Federated learning (FL) collaboratively trains a shared global model across multiple local clients, while keeping the training data decentralized in order to preserve data privacy.
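The aggregation step described above can be sketched as a FedAvg-style weighted average — a minimal illustration of the general FL idea, not necessarily this paper's specific method; the function name and flat weight vectors are assumptions for the example:

```python
# Hypothetical FedAvg-style server aggregation sketch.
# Each client trains locally on its own (private) data and uploads only
# its model weights; the server averages them, weighted by dataset size,
# so raw training data never leaves the clients.

def federated_average(client_weights, client_sizes):
    """Average per-client weight vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_weights = [0.0] * dim
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += w * (n / total)
    return global_weights

# Example: two clients, the second holding three times as much data.
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [1, 3]
print(federated_average(clients, sizes))  # -> [2.5, 3.5]
```

In a full FL round, the server would broadcast the averaged weights back to the clients and repeat; the weighting by `client_sizes` is what makes the global update equivalent to training on the pooled (but never shared) data in the idealized case.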