
Self-Supervised Learning

132 papers with code · Computer Vision

Self-Supervised Learning was proposed as a way to utilize unlabeled data, building on the success of supervised learning. Producing a dataset with good labels is expensive, while unlabeled data is generated all the time. The motivation of Self-Supervised Learning is to exploit this large amount of unlabeled data. The main idea is to generate labels from the unlabeled data itself, according to the structure or characteristics of the data, and then train on these generated labels in a supervised manner. Self-Supervised Learning is widely used in representation learning, where a model learns the latent features of the data. The technique is often employed in computer vision, video processing and robot control.

Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration
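The core recipe described above (derive labels from the data itself, then train in a supervised manner) can be sketched with a toy pretext task. The example below uses rotation prediction, a common illustrative pretext task (not taken from any paper listed here): each unlabeled image is rotated by 0, 90, 180 or 270 degrees, and the rotation index becomes a free label.

```python
# Minimal sketch of the self-supervised idea: generate labels from
# unlabeled data via a rotation-prediction pretext task (illustrative,
# not any specific paper's method).
import numpy as np

def make_rotation_pretext(images):
    """Turn unlabeled images of shape (N, H, W) into a labeled dataset."""
    samples, labels = [], []
    for img in images:
        for k in range(4):                    # 4 rotation classes
            samples.append(np.rot90(img, k))  # rotate by k * 90 degrees
            labels.append(k)                  # the label comes for free
    return np.stack(samples), np.array(labels)

unlabeled = np.random.rand(8, 32, 32)         # stand-in for unlabeled data
x, y = make_rotation_pretext(unlabeled)
print(x.shape, y.shape)                       # (32, 32, 32) (32,)
```

A classifier trained to predict `y` from `x` must learn visual features of the images, which is the representation-learning payoff.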


Greatest papers with code

Time-Contrastive Networks: Self-Supervised Learning from Video

23 Apr 2017 · tensorflow/models

While representations are learned from an unlabeled collection of task-related videos, robot behaviors such as pouring are learned by watching a single 3rd-person demonstration by a human.

METRIC LEARNING · SELF-SUPERVISED LEARNING · VIDEO ALIGNMENT

TabNet: Attentive Interpretable Tabular Learning

20 Aug 2019 · google-research/google-research

We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, TabNet.

DECISION MAKING · FEATURE SELECTION · SELF-SUPERVISED LEARNING · UNSUPERVISED REPRESENTATION LEARNING

Temporal Cycle-Consistency Learning

CVPR 2019 · google-research/google-research

We introduce a self-supervised representation learning method based on the task of temporal alignment between videos.

ANOMALY DETECTION · REPRESENTATION LEARNING · SELF-SUPERVISED LEARNING · VIDEO ALIGNMENT

wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations

20 Jun 2020 · pytorch/fairseq

When lowering the amount of labeled data to one hour, our model outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data.

QUANTIZATION · SELF-SUPERVISED LEARNING · SPEECH RECOGNITION

vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations

ICLR 2020 · pytorch/fairseq

We propose vq-wav2vec to learn discrete representations of audio segments through a wav2vec-style self-supervised context prediction task.

SELF-SUPERVISED LEARNING · SPEECH RECOGNITION

A Framework For Contrastive Self-Supervised Learning And Designing A New Approach

31 Aug 2020 · PyTorchLightning/pytorch-lightning

Contrastive self-supervised learning (CSL) is an approach to learn useful representations by solving a pretext task that selects and compares anchor, negative and positive (APN) features from an unlabeled dataset.

DATA AUGMENTATION · IMAGE CLASSIFICATION · SELF-SUPERVISED LEARNING
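The anchor/positive/negative (APN) comparison at the heart of contrastive self-supervised learning can be illustrated with a toy margin-based triplet loss (a simplified stand-in, not the paper's framework): the anchor is pulled toward the positive (another view of the same sample) and pushed away from the negative.

```python
# Toy illustration of the anchor/positive/negative comparison in
# contrastive self-supervised learning, using a margin-based triplet
# loss on raw feature vectors (a simplified sketch, not the CSL paper).
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    d_ap = np.linalg.norm(anchor - positive)  # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative)  # anchor-negative distance
    return max(0.0, d_ap - d_an + margin)     # zero once well separated

a = np.array([1.0, 0.0])   # anchor embedding
p = np.array([0.9, 0.1])   # augmented view of the same sample
n = np.array([-1.0, 0.0])  # view of a different sample
print(triplet_loss(a, p, n))  # 0.0: positive is already much closer
```

In practice the loss is applied to learned embeddings of augmented views, and the "labels" (which pairs are positive) come entirely from the unlabeled data.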

Bootstrap your own latent: A new approach to self-supervised Learning

13 Jun 2020deepmind/deepmind-research

From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.

REPRESENTATION LEARNING · SELF-SUPERVISED IMAGE CLASSIFICATION · SELF-SUPERVISED LEARNING · SEMI-SUPERVISED IMAGE CLASSIFICATION
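The BYOL objective described above can be sketched numerically (a NumPy toy with assumed shapes, not DeepMind's implementation): the online network's prediction is regressed onto the target network's representation with a cosine-similarity loss, and the target weights track the online weights by exponential moving average instead of receiving gradients.

```python
# NumPy sketch of the BYOL objective (illustrative only): a cosine
# regression loss between online prediction and target representation,
# plus the EMA rule that updates the target network's weights.
import numpy as np

def byol_loss(online_pred, target_repr):
    """2 - 2 * cosine_similarity between normalized vectors."""
    p = online_pred / np.linalg.norm(online_pred)
    z = target_repr / np.linalg.norm(target_repr)
    return 2.0 - 2.0 * float(p @ z)

def ema_update(target_w, online_w, tau=0.99):
    """Target weights slowly follow the online weights (no gradients)."""
    return tau * target_w + (1.0 - tau) * online_w

p = np.array([1.0, 0.0])          # online network's prediction
z = np.array([1.0, 0.0])          # target network's representation
print(byol_loss(p, z))            # identical directions -> loss 0.0
```

Because both vectors come from two augmented views of the same image, minimizing this loss needs no negative pairs at all, which is what distinguishes BYOL from contrastive methods.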