Self-Supervised Learning

1749 papers with code • 10 benchmarks • 41 datasets

Self-Supervised Learning was proposed as a way to exploit unlabeled data, building on the success of supervised learning. Producing a dataset with good labels is expensive, while unlabeled data is generated all the time. The motivation of Self-Supervised Learning is therefore to make use of this large amount of unlabeled data. The main idea is to generate labels from the unlabeled data itself, according to the structure or characteristics of the data, and then train on these generated labels in a supervised manner. Self-Supervised Learning is widely used in representation learning to make a model learn the latent features of the data. The technique is often employed in computer vision, video processing and robot control.

Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration

Image source: LeCun
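To make the idea above concrete, below is a minimal, hedged sketch of a classic pretext task (rotation prediction): the labels are generated from the data itself, and a network is then trained on them in a supervised way. The backbone, data, and hyperparameters are illustrative placeholders, not taken from any of the papers listed here.

```python
# Minimal sketch of the self-supervised idea described above: pseudo-labels
# (here, which rotation was applied to each image) are generated from the
# data itself, and the network is trained on them in a supervised manner.
# Backbone and data are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_batch(images: torch.Tensor):
    """Create 4 rotated copies of each image and the matching pseudo-labels (0-3)."""
    rotations = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return torch.cat(rotations, dim=0), labels

backbone = nn.Sequential(            # toy CNN encoder; replace with a real backbone
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(64, 4)              # predicts which of the 4 rotations was applied
optimizer = torch.optim.SGD(list(backbone.parameters()) + list(head.parameters()), lr=0.1)

images = torch.randn(8, 3, 32, 32)   # stands in for an unlabeled batch
x, y = rotate_batch(images)
loss = F.cross_entropy(head(backbone(x)), y)
loss.backward()
optimizer.step()
```

After pretraining on such a pretext task, the backbone is typically reused or fine-tuned on a downstream task with few labels.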

Libraries

Use these libraries to find Self-Supervised Learning models and implementations

Most implemented papers

Self-Supervised Learning of Pretext-Invariant Representations

facebookresearch/vissl CVPR 2020

The goal of self-supervised learning from images is to construct image representations that are semantically meaningful via pretext tasks that do not require semantic annotations for a large training set of images.
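As a rough illustration of the pretext-invariance objective, the hedged sketch below pulls together the embeddings of an image and its pretext-transformed version with a simple in-batch noise-contrastive loss. PIRL itself uses a jigsaw transform and a memory bank of negatives, so this shows only the general idea, not the paper's implementation.

```python
# Hedged sketch of pretext-invariance: the embedding of an image and of its
# pretext-transformed version are pulled together; other images in the batch
# act as negatives. (PIRL uses a memory bank of negatives instead.)
import torch
import torch.nn.functional as F

def pretext_invariant_nce(z_img: torch.Tensor, z_transformed: torch.Tensor, tau: float = 0.1):
    z_img = F.normalize(z_img, dim=1)
    z_transformed = F.normalize(z_transformed, dim=1)
    logits = z_img @ z_transformed.t() / tau       # (N, N) similarity matrix
    targets = torch.arange(z_img.size(0))          # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# embeddings of N images and of their pretext-transformed counterparts
loss = pretext_invariant_nce(torch.randn(16, 128), torch.randn(16, 128))
```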

Revisiting Self-Supervised Visual Representation Learning

google/revisiting-self-supervised CVPR 2019

Unsupervised visual representation learning remains a largely unsolved problem in computer vision research.

TERA: Self-Supervised Learning of Transformer Encoder Representation for Speech

s3prl/s3prl 12 Jul 2020

We present a large-scale comparison of various self-supervised models.

Dense Contrastive Learning for Self-Supervised Visual Pre-Training

open-mmlab/mmselfsup CVPR 2021

Compared to the baseline method MoCo-v2, our method introduces negligible computation overhead (only <1% slower), but demonstrates consistently superior performance when transferring to downstream dense prediction tasks including object detection, semantic segmentation and instance segmentation; and outperforms the state-of-the-art methods by a large margin.
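A hedged sketch of the dense contrastive idea follows: the contrastive loss is applied between per-location features of two augmented views rather than between single global vectors, with correspondences taken as the most similar location across views. The matching rule and the choice of negatives below are simplifications for illustration, not the exact DenseCL recipe.

```python
# Hedged sketch of a dense (per-location) contrastive objective in the spirit
# of DenseCL. Correspondences between views are taken as the most similar
# spatial location; negatives are pooled features from every image in the batch.
import torch
import torch.nn.functional as F

def dense_contrastive_loss(f1: torch.Tensor, f2: torch.Tensor, tau: float = 0.2):
    # f1, f2: (N, C, H, W) dense projections of two views of the same images
    n, c, h, w = f1.shape
    q = F.normalize(f1.flatten(2).transpose(1, 2), dim=-1)   # (N, HW, C) queries
    k = F.normalize(f2.flatten(2).transpose(1, 2), dim=-1)   # (N, HW, C) keys
    sim = q @ k.transpose(1, 2)                              # (N, HW, HW) similarities
    match = sim.argmax(dim=-1)                               # best-matching location in view 2
    pos = torch.gather(k, 1, match.unsqueeze(-1).expand(-1, -1, c))  # (N, HW, C) positives
    neg = F.normalize(k.mean(dim=1), dim=-1)                 # (N, C) pooled negatives
    l_pos = (q * pos).sum(-1, keepdim=True)                  # (N, HW, 1)
    l_neg = q @ neg.t()                                      # (N, HW, N)
    logits = torch.cat([l_pos, l_neg], dim=-1) / tau
    labels = torch.zeros(n, h * w, dtype=torch.long)         # positive is always index 0
    return F.cross_entropy(logits.flatten(0, 1), labels.flatten())

loss = dense_contrastive_loss(torch.randn(4, 128, 7, 7), torch.randn(4, 128, 7, 7))
```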

Self-Supervised Learning with Swin Transformers

SwinTransformer/Transformer-SSL 10 May 2021

We are witnessing a modeling shift from CNN to Transformers in computer vision.

VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning

facebookresearch/vicreg NeurIPS 2021

Recent self-supervised methods for image representation learning are based on maximizing the agreement between embedding vectors from different views of the same image.
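The hedged sketch below shows the three VICReg terms on a pair of view embeddings: an invariance (MSE) term, a variance term that keeps each embedding dimension's standard deviation above a margin, and a covariance term that decorrelates dimensions. The 25/25/1 weighting is a commonly cited default and should be treated as an assumption here.

```python
# Hedged sketch of the VICReg objective: invariance + variance + covariance
# regularization on the embeddings of two augmented views.
import torch
import torch.nn.functional as F

def vicreg_loss(z1: torch.Tensor, z2: torch.Tensor,
                sim_w: float = 25.0, var_w: float = 25.0, cov_w: float = 1.0):
    n, d = z1.shape
    invariance = F.mse_loss(z1, z2)                 # pull the two views together

    def variance(z):
        std = torch.sqrt(z.var(dim=0) + 1e-4)
        return torch.mean(F.relu(1.0 - std))        # hinge: keep per-dim std above 1

    def covariance(z):
        z = z - z.mean(dim=0)
        cov = (z.t() @ z) / (n - 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        return off_diag.pow(2).sum() / d            # decorrelate embedding dimensions

    return (sim_w * invariance
            + var_w * (variance(z1) + variance(z2))
            + cov_w * (covariance(z1) + covariance(z2)))

loss = vicreg_loss(torch.randn(32, 64), torch.randn(32, 64))
```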

Context Autoencoder for Self-Supervised Representation Learning

atten4vis/cae 7 Feb 2022

The pretraining objective consists of two tasks: masked representation prediction (predict the representations of the masked patches) and masked patch reconstruction (reconstruct the masked patches).
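A hedged, toy-scale sketch of these two tasks is shown below: an encoder sees only the visible patches, a cross-attention regressor predicts representations for the masked patches, and a linear decoder reconstructs their pixels. Positional embeddings and the paper's exact target encoder are omitted for brevity; all modules and shapes are placeholders.

```python
# Hedged sketch of the two Context Autoencoder pretext tasks. Simplifications:
# a fixed mask, no positional embeddings, and masked-patch target representations
# taken from a detached pass of the same encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

patches = torch.randn(2, 196, 768)             # (batch, num_patches, patch_dim), toy values
num_masked = 98                                 # mask half of the patches for simplicity
visible, masked_target = patches[:, num_masked:], patches[:, :num_masked]

encoder = nn.TransformerEncoder(nn.TransformerEncoderLayer(768, 8, batch_first=True), 2)
regressor = nn.MultiheadAttention(768, 8, batch_first=True)   # predicts masked representations
pixel_decoder = nn.Linear(768, 768)                           # reconstructs masked patch pixels
mask_queries = nn.Parameter(torch.zeros(1, num_masked, 768))  # stand-ins for positional mask tokens

z_visible = encoder(visible)                                  # encode visible patches only
z_masked_pred, _ = regressor(mask_queries.expand(2, -1, -1), z_visible, z_visible)
with torch.no_grad():
    z_masked_target = encoder(masked_target)                  # target representations of masked patches

latent_loss = F.mse_loss(z_masked_pred, z_masked_target)                  # task 1: masked representation prediction
pixel_loss = F.mse_loss(pixel_decoder(z_masked_pred), masked_target)      # task 2: masked patch reconstruction
loss = latent_loss + pixel_loss
```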

Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning

s3prl/s3prl 5 Jun 2020

To explore this issue, we proposed to employ Mockingjay, a self-supervised learning based model, to protect anti-spoofing models against adversarial attacks in the black-box scenario.

Understanding self-supervised Learning Dynamics without Contrastive Pairs

facebookresearch/luckmatters 12 Feb 2021

While contrastive approaches to self-supervised learning (SSL) learn representations by minimizing the distance between two augmented views of the same data point (positive pairs) and maximizing the distance between views from different data points (negative pairs), recent non-contrastive SSL methods (e.g., BYOL and SimSiam) show remarkable performance without negative pairs, using an extra learnable predictor and a stop-gradient operation.
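The hedged sketch below shows the SimSiam-style setup referenced here: two views pass through a shared encoder, a small predictor maps one branch onto the other, and a stop-gradient (detach) on the target branch avoids collapse without any negative pairs. Modules and dimensions are toy placeholders.

```python
# Hedged sketch of a non-contrastive (SimSiam-style) objective: shared encoder,
# learnable predictor, and stop-gradient on the target branch; no negative pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
predictor = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 128))

def negative_cosine(p, z):
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()   # stop-gradient on z

view1, view2 = torch.randn(32, 512), torch.randn(32, 512)       # two augmentations of the same data
z1, z2 = encoder(view1), encoder(view2)
loss = 0.5 * (negative_cosine(predictor(z1), z2) + negative_cosine(predictor(z2), z1))
loss.backward()
```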

SUPERB: Speech processing Universal PERformance Benchmark

s3prl/s3prl 3 May 2021

SUPERB is a leaderboard to benchmark the performance of a shared model across a wide range of speech processing tasks with minimal architecture changes and labeled data.