Self-Supervised Learning

1731 papers with code • 10 benchmarks • 41 datasets

Self-Supervised Learning extends the success of supervised learning to unlabeled data. Producing a dataset with good labels is expensive, while unlabeled data is generated all the time; the motivation of Self-Supervised Learning is to exploit this large amount of unlabeled data. The main idea is to generate labels from the unlabeled data itself, according to its structure or characteristics, and then train on the resulting pseudo-labels in a supervised manner. Self-Supervised Learning is widely used in representation learning, where a model learns the latent features of the data. The technique is often employed in computer vision, video processing, and robot control.

Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration

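The generate-labels-then-train-supervised recipe is easy to see in code. Below is a minimal, illustrative PyTorch sketch of one classic pretext task, rotation prediction: the rotation applied to each image serves as a free label, and the network is trained with an ordinary supervised loss. The encoder choice and hyperparameters are placeholders, not taken from any particular paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Pretext task: predict which of four rotations (0/90/180/270 degrees)
# was applied to an unlabeled image. The rotation index is a "free" label
# derived from the data itself, so the model trains with an ordinary
# supervised cross-entropy loss.

def rotate_batch(images: torch.Tensor):
    """Rotate each image by a random multiple of 90 degrees and
    return the rotated images together with the rotation index."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, k=int(k), dims=(1, 2)) for img, k in zip(images, labels)]
    )
    return rotated, labels

encoder = models.resnet18(num_classes=4)  # 4-way rotation classifier (illustrative)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(encoder.parameters(), lr=0.1, momentum=0.9)

unlabeled = torch.randn(32, 3, 224, 224)  # stand-in for a batch of unlabeled images
inputs, targets = rotate_batch(unlabeled)
loss = criterion(encoder(inputs), targets)
loss.backward()
optimizer.step()
```

After pretext training, the classification head is discarded and the encoder's features are reused for downstream tasks.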

Libraries

Use these libraries to find Self-Supervised Learning models and implementations (10 libraries in total).

Most implemented papers

Digging Into Self-Supervised Monocular Depth Estimation

nianticlabs/monodepth2 4 Jun 2018

Per-pixel ground-truth depth data is challenging to acquire at scale.

Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images

TaoHuang2018/Neighbor2Neighbor CVPR 2021

In this paper, we present a very simple yet effective method named Neighbor2Neighbor to train an image denoising model with only noisy images.
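
As an illustration of the idea (not the authors' exact training scheme), the sketch below pairs two sub-images taken from adjacent pixels of the same noisy image, so that one noisy view supervises the other; the paper's random neighbor sub-sampler and regularization term are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Simplified Neighbor2Neighbor sketch: two sub-images drawn from adjacent
# pixels of one noisy image act as a (noisy input, noisy target) pair,
# so no clean ground truth is ever needed.

denoiser = nn.Sequential(          # stand-in for a real denoising network
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)

noisy = torch.rand(4, 3, 64, 64)   # a batch of noisy images, no clean targets
sub1 = noisy[:, :, 0::2, 0::2]     # pixels at even rows and even columns
sub2 = noisy[:, :, 0::2, 1::2]     # their immediate right-hand neighbors

loss = F.mse_loss(denoiser(sub1), sub2)  # one noisy neighbor supervises the other
loss.backward()
```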

DeiT III: Revenge of the ViT

facebookresearch/deit 14 Apr 2022

Our evaluations on image classification (ImageNet-1k with and without pre-training on ImageNet-21k), transfer learning, and semantic segmentation show that our procedure outperforms previous fully supervised training recipes for ViT by a large margin.

ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders

facebookresearch/convnext-v2 CVPR 2023

This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation.

data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language

pytorch/fairseq Preprint 2022

While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind.

Whitening for Self-Supervised Representation Learning

htdt/self-supervised 13 Jul 2020

Most of the current self-supervised representation learning (SSL) methods are based on the contrastive loss and the instance-discrimination task, where augmented versions of the same image instance ("positives") are contrasted with instances extracted from other images ("negatives").
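
For reference, the instance-discrimination baseline the paper contrasts itself with can be sketched as an InfoNCE-style loss, where the matching view in the batch is the positive and every other image is a negative; the whitening-based alternative proposed in the paper is not shown, and the temperature and sizes below are illustrative.

```python
import torch
import torch.nn.functional as F

# Minimal InfoNCE-style instance-discrimination loss: embeddings of two
# augmented views of the same image are "positives"; all other images in
# the batch act as "negatives".

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature  # pairwise cosine similarities
    targets = torch.arange(z1.size(0))  # the matching index is the positive pair
    return F.cross_entropy(logits, targets)

z1 = torch.randn(256, 128)  # view-1 embeddings from an encoder
z2 = torch.randn(256, 128)  # view-2 embeddings of the same images
loss = info_nce(z1, z2)
```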

Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning

open-mmlab/mmselfsup NeurIPS 2020

From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.
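
A miniature version of this online/target setup, with an extra predictor on the online branch and a momentum (EMA) update for the target network, might look like the following; the network sizes, momentum value, and single-direction loss are simplifications of the paper's full recipe.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# BYOL in miniature: an online network (with an extra predictor) is trained
# to match the representation a slowly-moving target network produces for a
# different augmented view. No negative pairs are needed.

online = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
predictor = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128))
target = copy.deepcopy(online)
for p in target.parameters():
    p.requires_grad = False  # the target is updated by momentum, not gradients

view1, view2 = torch.randn(32, 512), torch.randn(32, 512)  # two augmented views (as features)
pred = F.normalize(predictor(online(view1)), dim=1)
with torch.no_grad():
    tgt = F.normalize(target(view2), dim=1)
loss = 2 - 2 * (pred * tgt).sum(dim=1).mean()  # equivalent to a normalized MSE
loss.backward()

# Momentum (EMA) update of the target network
m = 0.996
with torch.no_grad():
    for po, pt in zip(online.parameters(), target.parameters()):
        pt.mul_(m).add_(po, alpha=1 - m)
```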

An Empirical Study of Training Self-Supervised Vision Transformers

facebookresearch/moco-v3 ICCV 2021

In this work, we go back to basics and investigate the effects of several fundamental components for training self-supervised ViT.

Time-Contrastive Networks: Self-Supervised Learning from Video

tensorflow/models 23 Apr 2017

While representations are learned from an unlabeled collection of task-related videos, robot behaviors such as pouring are learned by watching a single 3rd-person demonstration by a human.
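
The time-contrastive signal can be illustrated with a triplet loss in which frames close together in time are pulled together and temporally distant frames are pushed apart; the margin and embedding sizes below are illustrative, and the paper's multi-view variant (simultaneous frames from two cameras as the positive pair) is not shown.

```python
import torch
import torch.nn.functional as F

# Time-contrastive sketch: the embedding of a frame at time t (anchor) should
# be closer to a frame from a small window around t (positive) than to a
# frame from a distant moment in the same video (negative).

anchor   = torch.randn(16, 128)  # embeddings of frames at time t
positive = torch.randn(16, 128)  # embeddings of nearby-in-time frames
negative = torch.randn(16, 128)  # embeddings of temporally distant frames

loss = F.triplet_margin_loss(anchor, positive, negative, margin=0.2)
```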

Charting the Right Manifold: Manifold Mixup for Few-shot Learning

nupurkmr9/S2M2_fewshot 28 Jul 2019

A recent regularization technique, Manifold Mixup, focuses on learning a general-purpose representation that is robust to small changes in the data distribution.
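
Manifold Mixup itself is straightforward to sketch: the hidden representations (and labels) of two examples are interpolated with a coefficient drawn from a Beta distribution, which smooths decision boundaries. The layer split and alpha below are illustrative, and the paper's few-shot pipeline built on top of this regularizer is not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Manifold Mixup in brief: mix *hidden* representations of two examples
# (and their labels) with a coefficient lambda ~ Beta(alpha, alpha).

lower = nn.Sequential(nn.Linear(784, 256), nn.ReLU())  # layers before the mix point
upper = nn.Linear(256, 10)                             # layers after the mix point

x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
lam = torch.distributions.Beta(2.0, 2.0).sample().item()
perm = torch.randperm(x.size(0))                       # partner examples to mix with

h = lower(x)
h_mixed = lam * h + (1 - lam) * h[perm]                # interpolate hidden states
logits = upper(h_mixed)
loss = lam * F.cross_entropy(logits, y) + (1 - lam) * F.cross_entropy(logits, y[perm])
loss.backward()
```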