Few-Shot Model Agnostic Federated Learning

Federated learning has received increasing attention for its ability to enable collaborative learning without leaking privacy. Promising advances have been achieved under the assumption that participants share the same model structure. However, when participants independently customize their models, the resulting model heterogeneity creates a communication barrier between them. Moreover, in real scenarios, the data held by each participant is often limited, so local models trained only on private data perform poorly. Consequently, this paper studies a new and challenging problem, namely few-shot model agnostic federated learning, where local participants design independent models from their limited private datasets. Considering the scarcity of private data, we propose to utilize abundant publicly available datasets to bridge the gap between local private participants. However, their use introduces two problems: inconsistent labels and a large domain gap between the public and private datasets. To address these issues, this paper presents a novel framework with two main parts: 1) model agnostic federated learning, which performs public-private communication by unifying the model prediction outputs on the shared public datasets; 2) latent embedding adaptation, which addresses the domain gap with an adversarial learning scheme that discriminates between the public and private domains. Together with a theoretical generalization bound analysis, comprehensive experiments under various settings verify our advantage over existing methods, providing a simple but effective baseline for future work. The code is available at https://github.com/WenkeHuang/FSFL.
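
The sketch below is a minimal, illustrative reading of the two parts described in the abstract, not the authors' actual implementation (see the linked repository for that). It assumes a single client with its own architecture, hypothetical names such as ClientNet and DomainDiscriminator, toy tensors in place of real datasets, and a randomly generated "consensus" over other clients' public-set predictions; only the structure of the losses is meant to be informative: a supervised loss on the few-shot private data, a distillation-style alignment loss on the shared public set, and an adversarial term that pushes public and private embeddings toward the same latent distribution.

```python
# Hedged sketch of the two-part idea: (1) model-agnostic communication via
# prediction alignment on a shared public set, (2) adversarial latent
# embedding adaptation between the public and private domains.
# All names, shapes, and weights here are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data standing in for a shared public set and one client's small private set.
public_x = torch.randn(64, 20)            # shared public samples (used unlabeled here)
private_x = torch.randn(16, 20)           # few-shot private samples
private_y = torch.randint(0, 5, (16,))    # 5-way private labels

class ClientNet(nn.Module):
    """One client's model; other clients may use entirely different architectures."""
    def __init__(self, hidden=32, num_classes=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(20, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.head(z)

class DomainDiscriminator(nn.Module):
    """Predicts whether a latent embedding comes from the public or the private domain."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, z):
        return self.net(z)

model = ClientNet()
disc = DomainDiscriminator()
opt_model = torch.optim.SGD(model.parameters(), lr=0.05)
opt_disc = torch.optim.SGD(disc.parameters(), lr=0.05)

# Stand-in for the consensus predictions of other participants on the public set
# (random here; in federated training it would be aggregated from the other clients).
consensus_logits = torch.randn(64, 5)

for step in range(100):
    # 1) Supervised loss on the few-shot private data.
    z_priv, logits_priv = model(private_x)
    loss_sup = F.cross_entropy(logits_priv, private_y)

    # 2) Model-agnostic communication: align this client's public-set predictions
    #    with the consensus distribution (a distillation-style KL term).
    z_pub, logits_pub = model(public_x)
    loss_align = F.kl_div(F.log_softmax(logits_pub, dim=1),
                          F.softmax(consensus_logits, dim=1),
                          reduction="batchmean")

    # 3a) Latent embedding adaptation: train the discriminator to tell domains apart.
    d_pub = disc(z_pub.detach())
    d_priv = disc(z_priv.detach())
    loss_disc = (F.binary_cross_entropy_with_logits(d_pub, torch.ones_like(d_pub)) +
                 F.binary_cross_entropy_with_logits(d_priv, torch.zeros_like(d_priv)))
    opt_disc.zero_grad(); loss_disc.backward(); opt_disc.step()

    # 3b) The encoder tries to fool the discriminator, shrinking the domain gap.
    d_priv_adv = disc(z_priv)
    loss_adv = F.binary_cross_entropy_with_logits(d_priv_adv, torch.ones_like(d_priv_adv))

    loss = loss_sup + loss_align + 0.1 * loss_adv
    opt_model.zero_grad(); loss.backward(); opt_model.step()

print("final private-set loss:", loss_sup.item())
```

In a full federated round, each client would run a local update like this, share only its public-set logits with the server, and receive a refreshed consensus; no model weights or private data would need to be exchanged, which is what makes the scheme model agnostic.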
