Source-Free Few-Shot Domain Adaptation

29 Sep 2021 · Wenyu Zhang, Li Shen, Chuan-Sheng Foo, Wanyue Zhang

Deep models are prone to performance degradation when there is a domain shift between the source (training) data and target (test) data. Test-time adaptation of pre-trained source models with streaming unlabelled target data is an attractive setting that protects the privacy of source data, but it imposes mini-batch size and class-distribution requirements on the streaming data that may not hold in practice. In this paper, we propose the source-free few-shot adaptation setting to address these practical challenges in deploying test-time adaptation. Specifically, we propose a constrained optimization of the source model's batch normalization layers that finetunes linear combination coefficients between the training and support-set statistics. The proposed method is easy to implement and improves source model performance with as little as one labelled target sample per class. We evaluate on multiple multi-domain classification datasets. Experiments demonstrate that our proposed method achieves comparable or better performance than test-time adaptation, while not being constrained by streaming conditions.
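To make the core idea concrete, below is a minimal PyTorch sketch of a batch normalization layer whose statistics are a learnable linear combination of the frozen source statistics and statistics estimated from the labelled support set. The class name `MixedStatsBatchNorm2d`, the per-layer sigmoid-constrained coefficient, and the choice to freeze the affine parameters are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class MixedStatsBatchNorm2d(nn.Module):
    """Sketch: BatchNorm2d that normalizes with a learnable mixture of
    frozen source (training) statistics and support-set statistics."""

    def __init__(self, source_bn: nn.BatchNorm2d,
                 support_mean: torch.Tensor, support_var: torch.Tensor):
        super().__init__()
        # Frozen running statistics from the pre-trained source model.
        self.register_buffer("src_mean", source_bn.running_mean.detach().clone())
        self.register_buffer("src_var", source_bn.running_var.detach().clone())
        # Statistics computed once from the few labelled target samples.
        self.register_buffer("sup_mean", support_mean)
        self.register_buffer("sup_var", support_var)
        # Affine parameters copied from the source model and kept fixed here
        # (an assumption of this sketch).
        self.register_buffer("weight", source_bn.weight.detach().clone())
        self.register_buffer("bias", source_bn.bias.detach().clone())
        self.eps = source_bn.eps
        # Per-layer mixing coefficient; the sigmoid keeps it in (0, 1),
        # one simple way to realise a constrained linear combination.
        self.alpha_logit = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        alpha = torch.sigmoid(self.alpha_logit)
        mean = alpha * self.src_mean + (1.0 - alpha) * self.sup_mean
        var = alpha * self.src_var + (1.0 - alpha) * self.sup_var
        x_hat = (x - mean[None, :, None, None]) / torch.sqrt(
            var[None, :, None, None] + self.eps)
        return (self.weight[None, :, None, None] * x_hat
                + self.bias[None, :, None, None])


# Example usage (assuming `model` has had its BN layers swapped for
# MixedStatsBatchNorm2d): only the mixing coefficients are finetuned
# on the labelled support set.
# params = [m.alpha_logit for m in model.modules()
#           if isinstance(m, MixedStatsBatchNorm2d)]
# optimizer = torch.optim.SGD(params, lr=1e-2)
```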
