Self-Supervised Learning

1734 papers with code • 10 benchmarks • 41 datasets

Self-Supervised Learning was proposed to exploit unlabeled data, building on the success of supervised learning. Producing a dataset with good labels is expensive, while unlabeled data is generated all the time. The motivation of Self-Supervised Learning is therefore to make use of this large amount of unlabeled data. The main idea is to generate labels from the unlabeled data itself, according to the structure or characteristics of the data, and then train on it in a supervised manner. Self-Supervised Learning is widely used in representation learning to make a model learn the latent features of the data. The technique is often employed in computer vision, video processing, and robot control.

Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration

Image source: LeCun
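
As a concrete illustration of the label-generation idea described above, the sketch below sets up a rotation-prediction pretext task in PyTorch: pseudo-labels are created by rotating unlabeled images, and an encoder is trained to predict the rotation. This is a minimal, generic sketch; the encoder choice (ResNet-18), the hyperparameters, and the assumed unlabeled_loader are illustrative placeholders, not taken from any paper listed on this page.

import torch
import torch.nn as nn
import torchvision

# Pretext labels are derived from the data itself: each image is rotated by
# 0, 90, 180, or 270 degrees, and the rotation index becomes the target class.
def make_rotation_batch(images):  # images: (N, C, H, W) batch of unlabeled tensors
    ks = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, ks)])
    return rotated, ks

# Generic encoder with a 4-way classification head for the pretext task.
model = torchvision.models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def pretext_step(images):
    rotated, targets = make_rotation_batch(images)
    logits = model(rotated)
    loss = criterion(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage (assumes `unlabeled_loader` yields batches of image tensors):
# for images in unlabeled_loader:
#     pretext_step(images)

After pretraining on such a pretext task, the classification head is typically discarded and the encoder's learned representations are reused for downstream tasks.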

Libraries

Use these libraries to find Self-Supervised Learning models and implementations
See all 10 libraries.

TFPred: Learning Discriminative Representations from Unlabeled Data for Few-Label Rotating Machinery Fault Diagnosis

Xiaohan-Chen/TFPred Control Engineering Practice 2024

Recent advances in intelligent rotating machinery fault diagnosis have been enabled by the availability of massive labeled training data.

01 May 2024

Vim4Path: Self-Supervised Vision Mamba for Histopathology Images

atlasanalyticslab/vim4path 20 Apr 2024

Multi-instance learning methods have addressed this challenge, leveraging image patches to classify slides with models pretrained using Self-Supervised Learning (SSL) approaches.

Moving Object Segmentation: All You Need Is SAM (and Flow)

Jyxarthur/flowsam 18 Apr 2024

The objective of this paper is motion segmentation -- discovering and segmenting the moving objects in a video.

Hypergraph Self-supervised Learning with Sampling-efficient Signals

coco-hut/se-hssl 18 Apr 2024

Self-supervised learning (SSL) provides a promising alternative for representation learning on hypergraphs without costly labels.

Masked Autoencoders for Microscopy are Scalable Learners of Cellular Biology

recursionpharma/maes_microscopy 16 Apr 2024

Featurizing microscopy images for use in biological research remains a significant challenge, especially for large-scale experiments spanning millions of images.

Multi-Task Multi-Modal Self-Supervised Learning for Facial Expression Recognition

tub-cv-group/conclugen 16 Apr 2024

To that end, we examine how different combinations of self-supervised tasks perform on the downstream facial expression recognition task.

How to build the best medical image segmentation algorithm using foundation models: a comprehensive empirical study with Segment Anything Model

mazurowski-lab/finetune-sam 15 Apr 2024

Automated segmentation is a fundamental medical image analysis task, which has seen significant advances with the advent of deep learning.

Can We Break Free from Strong Data Augmentations in Self-Supervised Learning?

neurai-lab/ssl-prior 15 Apr 2024

Self-supervised learning (SSL) has emerged as a promising solution for addressing the challenge of limited labeled data in deep neural networks (DNNs), offering scalability potential.

An Experimental Comparison Of Multi-view Self-supervised Methods For Music Tagging

deezer/multi-view-ssl-benchmark 14 Apr 2024

In this study, we expand the scope of pretext tasks applied to music by investigating and comparing the performance of new self-supervised methods for music tagging.

DEGNN: Dual Experts Graph Neural Network Handling Both Edge and Node Feature Noise

taihasegawa/degnn 14 Apr 2024

Leveraging these modified representations, DEGNN subsequently addresses downstream tasks, ensuring robustness against noise present in both edges and node features of real-world graphs.
