Cross-Domain Few-Shot Learning

31 papers with code • 1 benchmark • 1 dataset

Cross-domain few-shot learning is essentially transfer learning: a model is trained on a source domain and then transferred to a target domain, subject to three conditions: (1) the classes in the target domain never appear in the source domain, (2) the data distribution of the target domain differs from that of the source domain, and (3) each class in the target domain has only a few labeled examples.
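
Evaluation in this setting is typically episodic: each test episode draws N previously unseen target-domain classes with K labeled support examples and a set of query examples per class. Below is a minimal sketch of sampling such an episode; the function name `sample_episode`, the label layout, and all sizes are illustrative assumptions, not part of any particular benchmark.

```python
import random
from collections import defaultdict

def sample_episode(labels, n_way=5, k_shot=5, n_query=15, rng=random):
    """Sample indices for an N-way K-shot episode from a labeled target-domain set.

    `labels` is a list where labels[i] is the class of example i.
    Returns (support_indices, query_indices).
    """
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)

    # Pick N novel classes that have enough examples for support + query.
    eligible = [c for c, idxs in by_class.items() if len(idxs) >= k_shot + n_query]
    classes = rng.sample(eligible, n_way)

    support, query = [], []
    for c in classes:
        chosen = rng.sample(by_class[c], k_shot + n_query)
        support.extend(chosen[:k_shot])   # K labeled examples per class
        query.extend(chosen[k_shot:])     # remaining examples to classify
    return support, query

# Example: 100 toy examples spread over 10 target-domain classes.
labels = [i % 10 for i in range(100)]
support_idx, query_idx = sample_episode(labels, n_way=5, k_shot=1, n_query=5)
print(len(support_idx), len(query_idx))  # 5, 25
```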


ME-D2N: Multi-Expert Domain Decompositional Network for Cross-Domain Few-Shot Learning

lovelyqian/ME-D2N_for_CDFSL 11 Oct 2022

Concretely, to solve the data imbalance problem between the source data with sufficient examples and the auxiliary target data with limited examples, we build our model under the umbrella of multi-expert learning.
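
A generic way to pair a data-rich source expert with a data-poor target expert is to fuse their predictions with a learned gate. The sketch below shows only that generic multi-expert pattern (all module names and sizes are made up), not ME-D2N's domain-decompositional distillation.

```python
import torch
import torch.nn as nn

class TwoExpertClassifier(nn.Module):
    """Toy two-expert model: one expert per domain, logits fused by a learned gate.

    This is only a generic multi-expert sketch; ME-D2N itself decomposes a
    student network from two domain teachers via knowledge distillation.
    """
    def __init__(self, feat_dim=64, n_classes=10):
        super().__init__()
        self.source_expert = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, n_classes))
        self.target_expert = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, n_classes))
        self.gate = nn.Sequential(nn.Linear(feat_dim, 2), nn.Softmax(dim=-1))

    def forward(self, x):
        w = self.gate(x)                                   # per-example expert weights
        logits = torch.stack([self.source_expert(x), self.target_expert(x)], dim=1)
        return (w.unsqueeze(-1) * logits).sum(dim=1)       # gated mixture of expert logits

model = TwoExpertClassifier()
print(model(torch.randn(4, 64)).shape)  # torch.Size([4, 10])
```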


TGDM: Target Guided Dynamic Mixup for Cross-Domain Few-Shot Learning

lovelyqian/wave-SAN-CDFSL 11 Oct 2022

The proposed TGDM framework contains a Mixup-3T network for learning classifiers and a dynamic ratio generation network (DRGN) for learning the optimal mix ratio.
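
The core mixup operation is a convex combination of a source batch and a target batch, with the ratio supplied by a small generator network rather than sampled from a fixed Beta distribution. The sketch below is a loose illustration of that idea; `RatioGenerator` and every size in it are hypothetical and much simpler than the Mixup-3T/DRGN design.

```python
import torch
import torch.nn as nn

def mixup(x_src, y_src, x_tgt, y_tgt, lam):
    """Convex combination of a source and a target batch with mix ratio `lam`."""
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    return x_mix, y_src, y_tgt, lam  # loss: lam * CE(pred, y_src) + (1 - lam) * CE(pred, y_tgt)

class RatioGenerator(nn.Module):
    """Tiny stand-in for a dynamic ratio generator: maps a context vector to lam in (0, 1)."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, context):
        return self.net(context).mean()  # one scalar mix ratio for the batch

gen = RatioGenerator()
lam = gen(torch.randn(8, 16))
x_mix, *_ = mixup(torch.randn(8, 3, 32, 32), torch.zeros(8).long(),
                  torch.randn(8, 3, 32, 32), torch.ones(8).long(), lam)
print(float(lam), x_mix.shape)
```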


Self-Supervision Can Be a Good Few-Shot Learner

bbbdylan/unisiam 19 Jul 2022

Specifically, we maximize the mutual information (MI) of instances and their representations with a low-bias MI estimator to perform self-supervised pre-training.
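
A common (though biased) lower bound on that mutual information is the InfoNCE contrastive loss between two augmented views. The paper argues for a lower-bias estimator, so the snippet below should be read only as the textbook baseline, with all tensor sizes assumed.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss between two augmented views: a standard lower bound on the
    mutual information between instances and their representations.
    (Only the textbook estimator, not the paper's low-bias one.)
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))   # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
print(loss.item())
```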


Learn-to-Decompose: Cascaded Decomposition Network for Cross-Domain Few-Shot Facial Expression Recognition

zouxinyi0625/cdnet 16 Jul 2022

Extensive experiments on both in-the-lab and in-the-wild compound expression datasets demonstrate the superiority of our proposed CDNet against several state-of-the-art FSL methods.


Graph Information Aggregation Cross-Domain Few-Shot Learning for Hyperspectral Image Classification

YuxiangZhang-BIT/IEEE_TNNLS_Gia-CFSL IEEE Transactions on Neural Networks and Learning Systems, 30 Jun 2022

The IDE-block characterizes and aggregates intra-domain non-local relationships, while the CSA-block captures inter-domain feature and distribution similarities.
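
In the abstract, graph aggregation amounts to message passing over edges weighted by feature similarity. The sketch below shows one round of such aggregation on made-up pixel embeddings; it does not reproduce the IDE-block or CSA-block designs.

```python
import torch
import torch.nn.functional as F

def graph_aggregate(feats, temperature=0.1):
    """One round of message passing over a similarity graph built from the features
    themselves: edge weights are softmax-normalized cosine similarities.
    """
    normed = F.normalize(feats, dim=1)
    sim = normed @ normed.t() / temperature   # pairwise similarities as edge logits
    adj = F.softmax(sim, dim=1)               # row-normalized adjacency
    return adj @ feats                        # aggregate neighbor features

pixels = torch.randn(64, 32)                  # e.g. 64 hyperspectral pixel embeddings
print(graph_aggregate(pixels).shape)          # torch.Size([64, 32])
```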


Feature Extractor Stacking for Cross-domain Few-shot Learning

hongyujerrywang/featureextractorstacking 12 May 2022

Recently published CDFSL methods generally construct a universal model that combines knowledge of multiple source domains into one feature extractor.
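
A simple baseline for combining several frozen source-domain extractors is to concatenate their features and fit a linear classifier on the support set; the paper's stacking approach instead combines per-extractor classifiers, so the snippet below (with invented backbone shapes) only illustrates the multi-extractor setting.

```python
import torch
import torch.nn as nn

def concat_features(backbones, x):
    """Concatenate features from several frozen source-domain extractors."""
    with torch.no_grad():
        return torch.cat([b(x) for b in backbones], dim=1)

# Two hypothetical frozen extractors with different output widths.
backbones = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64)),
             nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))]
support_x = torch.randn(25, 3, 32, 32)
support_y = torch.arange(5).repeat_interleave(5)       # 5-way, 5-shot support labels

feats = concat_features(backbones, support_x)           # (25, 192)
clf = nn.Linear(feats.size(1), 5)                       # linear probe fit on the support set
opt = torch.optim.Adam(clf.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(clf(feats), support_y)
    loss.backward()
    opt.step()
print(loss.item())
```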


Universal Representations: A Unified Look at Multiple Task and Domain Learning

VICO-UoE/URL 6 Apr 2022

We propose a unified look at jointly learning multiple vision tasks and visual domains through universal representations, a single deep neural network.
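
The basic structure behind a universal representation is a single shared backbone with a lightweight head per domain or task. The sketch below shows only that skeleton with invented dimensions; URL additionally learns the shared network by distilling from domain-specific models.

```python
import torch
import torch.nn as nn

class UniversalBackbone(nn.Module):
    """Single shared feature extractor with one lightweight head per domain/task."""
    def __init__(self, n_classes_per_domain):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
        self.heads = nn.ModuleDict({name: nn.Linear(256, n)
                                    for name, n in n_classes_per_domain.items()})

    def forward(self, x, domain):
        return self.heads[domain](self.backbone(x))

model = UniversalBackbone({"imagenet": 1000, "omniglot": 1623})
print(model(torch.randn(2, 3, 32, 32), "omniglot").shape)  # torch.Size([2, 1623])
```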


Wave-SAN: Wavelet based Style Augmentation Network for Cross-Domain Few-Shot Learning

lovelyqian/wave-SAN-CDFSL 15 Mar 2022

The key challenge of CD-FSL lies in the huge data shift between source and target domains, which is typically in the form of totally different visual styles.
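
One simplified way to exchange visual styles is to re-normalize an image to another image's channel-wise mean and standard deviation (AdaIN-style); Wave-SAN instead swaps styles between the wavelet low-frequency components of two source images, so the snippet below is only a loose illustration.

```python
import torch

def swap_style(content, style, eps=1e-5):
    """Re-normalize `content` to the channel-wise mean/std of `style` (AdaIN-style).
    A simplified illustration of style exchange, not the wavelet-based version.
    """
    c_mean, c_std = content.mean(dim=(2, 3), keepdim=True), content.std(dim=(2, 3), keepdim=True)
    s_mean, s_std = style.mean(dim=(2, 3), keepdim=True), style.std(dim=(2, 3), keepdim=True)
    return (content - c_mean) / (c_std + eps) * s_std + s_mean

a, b = torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32)
print(swap_style(a, b).shape)  # torch.Size([4, 3, 32, 32])
```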


Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty

sungnyun/cd-fsl 1 Feb 2022

This data enables self-supervised pre-training on the target domain, in addition to supervised pre-training on the source domain.
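
A combined objective of this kind can be as simple as adding a cross-entropy term on labeled source batches to a self-supervised consistency term on unlabeled target views. The sketch below uses a negative-cosine consistency loss as a stand-in for whichever SSL objective is actually used, with all shapes and module names assumed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
classifier = nn.Linear(128, 64)   # 64 source classes, hypothetical

def pretrain_step(src_x, src_y, tgt_view1, tgt_view2, ssl_weight=1.0):
    """One combined step: supervised cross-entropy on labeled source data plus a
    simple self-supervised consistency loss between two views of unlabeled target data.
    """
    sup = F.cross_entropy(classifier(backbone(src_x)), src_y)
    z1 = F.normalize(backbone(tgt_view1), dim=1)
    z2 = F.normalize(backbone(tgt_view2), dim=1)
    ssl = -(z1 * z2.detach()).sum(dim=1).mean()   # negative cosine similarity, stop-grad on one branch
    return sup + ssl_weight * ssl

loss = pretrain_step(torch.randn(8, 3, 32, 32), torch.randint(0, 64, (8,)),
                     torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32))
print(loss.item())
```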


Boosting the Generalization Capability in Cross-Domain Few-shot Learning via Noise-enhanced Supervised Autoencoder

Dipeshtamboli/Cross-Domain-FSL-via-NSAE ICCV 2021 (11 Aug 2021)

State-of-the-art (SOTA) few-shot learning (FSL) methods suffer a significant performance drop in the presence of domain differences between the source and target datasets.
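
Read generically, the idea in the title is a supervised autoencoder trained on noise-corrupted inputs: classify from the latent code while reconstructing the clean input. The sketch below shows only that generic pattern with invented dimensions, not the exact NSAE training scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisedDenoisingAE(nn.Module):
    """Generic supervised autoencoder with input noise: classify from the latent code
    and reconstruct the clean input from a noise-corrupted one.
    """
    def __init__(self, dim=3 * 32 * 32, latent=128, n_classes=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(dim, latent), nn.ReLU())
        self.dec = nn.Linear(latent, dim)
        self.cls = nn.Linear(latent, n_classes)

    def forward(self, x, noise_std=0.1):
        z = self.enc(x + noise_std * torch.randn_like(x))  # encode a noisy view
        return self.cls(z), self.dec(z)

model = SupervisedDenoisingAE()
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 64, (8,))
logits, recon = model(x)
loss = F.cross_entropy(logits, y) + F.mse_loss(recon, x.flatten(1))
print(loss.item())
```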
