Contrastive Predictive Coding (CPC), which predicts future segments of speech from past segments, is emerging as a powerful algorithm for representation learning of speech signals.
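The predictive objective behind CPC is commonly formulated as an InfoNCE loss: a prediction made from past context is scored against the true future representation and a set of negatives. A minimal NumPy sketch is below; the function name, shapes, and dot-product scoring are illustrative assumptions, not the exact formulation of any paper cited here.

```python
import numpy as np

def info_nce_loss(z_pred, z_future, z_negatives):
    """InfoNCE-style loss: score the true future latent against negatives.

    z_pred:      (d,)   prediction of the future latent from past context
    z_future:    (d,)   encoder output for the true future segment
    z_negatives: (k, d) encoder outputs for negative (distractor) segments
    """
    pos = z_pred @ z_future                 # positive logit (scalar)
    neg = z_negatives @ z_pred              # (k,) negative logits
    logits = np.concatenate([[pos], neg])   # positive at index 0
    logits -= logits.max()                  # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum())
    return -log_softmax[0]                  # cross-entropy on the positive
```

When the prediction aligns perfectly with the true future and the negatives score low, the loss approaches zero; when all candidates are indistinguishable, it equals log(k + 1).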
Visual referring expression recognition is a challenging task that requires natural language understanding in the context of an image.
To address this problem, this paper explores, for the first time, learning transferable representations from unlabeled PolSAR data through convolutional architectures.
We present Domain Contrast (DC), a simple yet effective approach inspired by contrastive learning for training domain adaptive detectors.
Perceptual learning approaches such as the perceptual loss are empirically powerful for such tasks, but they usually rely on a pre-trained classification network to provide features, which are not necessarily optimal for the visual perception of image transformations.
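A perceptual loss compares images in the feature space of a network rather than in pixel space. The sketch below uses a toy fixed filter bank as a stand-in for a pre-trained network's features (the extractor and its kernels are hypothetical placeholders, not the actual pre-trained classification network the excerpt refers to):

```python
import numpy as np

def toy_features(img, kernels):
    """Stand-in for pre-trained feature maps: valid 2-D correlations of a
    grayscale image with a fixed filter bank of shape (n_filters, kh, kw)."""
    h, w = img.shape
    kh, kw = kernels.shape[1:]
    feats = []
    for k in kernels:
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
        feats.append(out)
    return np.stack(feats)

def perceptual_loss(img_a, img_b, kernels):
    """Mean squared distance in feature space rather than pixel space."""
    fa = toy_features(img_a, kernels)
    fb = toy_features(img_b, kernels)
    return float(((fa - fb) ** 2).mean())
```

With gradient-like filters, a constant brightness shift yields zero perceptual loss even though the pixel-wise loss is large, which illustrates why feature-space comparisons behave differently from pixel losses.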
Extensive experiments on the ImageNet dataset demonstrate the effectiveness of our method.
Tags: Contrastive Learning, Deep Clustering, Few-Shot Image Classification, Object Detection, Representation Learning, Self-Supervised Learning, Semantic Segmentation, Transfer Learning, Unsupervised Image Classification
In this work, we propose strategies for extending the contrastive learning framework to segmentation of volumetric medical images in the semi-supervised setting with limited annotations, leveraging domain-specific and problem-specific cues.
Enhancing feature transferability by matching marginal distributions has led to improvements in domain adaptation, although this is at the expense of feature discrimination.
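One common instantiation of marginal-distribution matching is the maximum mean discrepancy (MMD) between source- and target-domain feature batches. The following NumPy sketch is illustrative only; the excerpt above does not specify which matching criterion its method uses, and the RBF bandwidth here is an assumed default.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian kernel matrix between row-vector batches x (n, d) and y (m, d)."""
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    """Biased estimate of squared MMD between two feature batches.

    Zero when the batches are identically distributed (in the kernel's RKHS);
    larger values indicate a bigger gap between the marginal distributions.
    """
    kxx = rbf_kernel(source, source, sigma).mean()
    kyy = rbf_kernel(target, target, sigma).mean()
    kxy = rbf_kernel(source, target, sigma).mean()
    return kxx + kyy - 2.0 * kxy
```

Minimizing such a statistic over the feature extractor aligns the marginal feature distributions of the two domains, which is precisely the step the excerpt notes can come at the expense of feature discrimination.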