Cross-Domain Few-Shot Learning
31 papers with code • 1 benchmark • 1 dataset
At its core, cross-domain few-shot learning is a transfer learning problem: a model is trained on a source domain and then transferred to a target domain, subject to three conditions: (1) the classes in the target domain never appear in the source domain; (2) the data distribution of the target domain differs from that of the source domain; (3) each class in the target domain has very few labeled examples.
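The three conditions above define how such models are typically evaluated: the target domain supplies small N-way K-shot "episodes" of novel classes. As a minimal, hedged sketch (prototypical-network-style nearest-prototype classification, not any specific paper's method; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def prototype_classify(support, support_labels, query, n_way):
    """Nearest-prototype classification for one N-way K-shot episode.

    support: (n_way * k_shot, d) features of the few labeled target-domain examples
    support_labels: (n_way * k_shot,) integer labels in [0, n_way)
    query: (q, d) features of unlabeled target-domain queries
    Returns predicted class indices for the queries.
    """
    # Class prototype = mean of that class's support embeddings.
    prototypes = np.stack([support[support_labels == c].mean(axis=0)
                           for c in range(n_way)])
    # Assign each query to the nearest prototype (Euclidean distance).
    dists = np.linalg.norm(query[:, None, :] - prototypes[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way 3-shot episode: two well-separated Gaussian "classes".
support = np.concatenate([rng.normal(0, 0.1, (3, 4)),
                          rng.normal(3, 0.1, (3, 4))])
support_labels = np.array([0, 0, 0, 1, 1, 1])
query = np.concatenate([rng.normal(0, 0.1, (5, 4)),
                        rng.normal(3, 0.1, (5, 4))])
preds = prototype_classify(support, support_labels, query, n_way=2)
print(preds)  # first 5 queries near class 0, last 5 near class 1
```

In the cross-domain setting, the feature extractor producing these embeddings is trained on the source domain, which is exactly where the domain-shift difficulty enters.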
Latest papers
ME-D2N: Multi-Expert Domain Decompositional Network for Cross-Domain Few-Shot Learning
Concretely, to solve the data imbalance problem between the source data with sufficient examples and the auxiliary target data with limited examples, we build our model under the umbrella of multi-expert learning.
TGDM: Target Guided Dynamic Mixup for Cross-Domain Few-Shot Learning
The proposed TGDM framework contains a Mixup-3T network for learning classifiers and a dynamic ratio generation network (DRGN) for learning the optimal mix ratio.
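The paper's networks are not reproduced here, but the mix ratio the DRGN learns controls a standard mixup interpolation between source and target data. A minimal sketch with a fixed ratio (the DRGN would instead predict `lam` dynamically; this interface is hypothetical):

```python
import numpy as np

def mixup(x_source, x_target, lam):
    """Convex combination of a source batch and a target batch.

    lam in [0, 1] is the mix ratio; lam=1 keeps pure source data,
    lam=0 keeps pure target data.
    """
    return lam * x_source + (1.0 - lam) * x_target

x_s = np.ones((2, 4))   # stand-in for source-domain inputs
x_t = np.zeros((2, 4))  # stand-in for target-domain inputs
mixed = mixup(x_s, x_t, lam=0.3)
print(mixed[0])  # -> [0.3 0.3 0.3 0.3]
```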
Self-Supervision Can Be a Good Few-Shot Learner
Specifically, we maximize the mutual information (MI) of instances and their representations with a low-bias MI estimator to perform self-supervised pre-training.
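One widely used lower bound on the mutual information between two views of the same instance is InfoNCE; this is a common choice for such objectives, not necessarily the exact estimator used in the paper. A minimal numpy sketch:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss over a batch of positive pairs.

    z1, z2: (n, d) L2-normalized embeddings; row i of z1 and row i of z2
    are two views of the same instance (a positive pair), and all other
    rows in z2 serve as negatives.
    """
    logits = z1 @ z2.T / temperature             # (n, n) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives on the diagonal

rng = np.random.default_rng(1)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
# Identical views: each positive dominates its row, so the loss is low
# compared with mismatched (reversed) pairings.
loss_matched = info_nce(z, z)
loss_shuffled = info_nce(z, z[::-1].copy())
print(loss_matched, loss_shuffled)
```

Minimizing this loss pulls the two representations of each instance together while pushing apart representations of different instances, which is the pre-training signal the paper builds on.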
Learn-to-Decompose: Cascaded Decomposition Network for Cross-Domain Few-Shot Facial Expression Recognition
Extensive experiments on both in-the-lab and in-the-wild compound expression datasets demonstrate the superiority of our proposed CDNet against several state-of-the-art FSL methods.
Graph Information Aggregation Cross-Domain Few-Shot Learning for Hyperspectral Image Classification
The IDE-block characterizes and aggregates the intradomain nonlocal relationships, while the interdomain feature and distribution similarities are captured in the CSA-block.
Feature Extractor Stacking for Cross-domain Few-shot Learning
Recently published CDFSL methods generally construct a universal model that combines knowledge of multiple source domains into one feature extractor.
Universal Representations: A Unified Look at Multiple Task and Domain Learning
We propose a unified look at jointly learning multiple vision tasks and visual domains through universal representations, a single deep neural network.
Wave-SAN: Wavelet based Style Augmentation Network for Cross-Domain Few-Shot Learning
The key challenge of CD-FSL lies in the huge data shift between source and target domains, which is typically in the form of totally different visual styles.
Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty
This data enables self-supervised pre-training on the target domain, in addition to supervised pre-training on the source domain.
Boosting the Generalization Capability in Cross-Domain Few-shot Learning via Noise-enhanced Supervised Autoencoder
State of the art (SOTA) few-shot learning (FSL) methods suffer significant performance drop in the presence of domain differences between source and target datasets.