Self-Supervised Learning
1734 papers with code • 10 benchmarks • 41 datasets
Self-Supervised Learning extends the success of supervised learning to unlabeled data. Producing a dataset with good labels is expensive, while unlabeled data is generated all the time, so the motivation of Self-Supervised Learning is to make use of this large amount of unlabeled data. The main idea is to generate labels from the unlabeled data itself, according to its structure or characteristics, and then train on the resulting pseudo-labeled data in a supervised manner. Self-Supervised Learning is widely used in representation learning to make a model learn the latent features of the data. This technique is often employed in computer vision, video processing and robot control.
Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration
Image source: LeCun
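The label-generation idea above can be sketched with a classic pretext task, rotation prediction: each unlabeled image is rotated by a random multiple of 90 degrees, and the rotation index becomes the training label. This is a minimal illustrative sketch using NumPy; the function name `make_rotation_pretext` is hypothetical, not from any particular library.

```python
import numpy as np

def make_rotation_pretext(images, seed=0):
    """Turn unlabeled images into a labeled dataset for a rotation-
    prediction pretext task: each image is rotated by k * 90 degrees
    and k (0..3) becomes its pseudo-label. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for img in images:
        k = int(rng.integers(0, 4))   # pseudo-label derived from the data itself
        xs.append(np.rot90(img, k))   # rotate counter-clockwise k times
        ys.append(k)
    return np.stack(xs), np.array(ys)

# "Unlabeled" toy images: 8x8 arrays with no human-provided labels.
unlabeled = [np.arange(64).reshape(8, 8) + i for i in range(5)]
X, y = make_rotation_pretext(unlabeled)
```

A classifier trained to predict `y` from `X` must learn orientation-sensitive features of the images, which is the representation-learning payoff; the pseudo-labels cost nothing to produce.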
Libraries
Use these libraries to find Self-Supervised Learning models and implementations.
Latest papers
TFPred: Learning Discriminative Representations from Unlabeled Data for Few-Label Rotating Machinery Fault Diagnosis
Recent advances in intelligent rotating machinery fault diagnosis have been enabled by the availability of massive labeled training data.
Vim4Path: Self-Supervised Vision Mamba for Histopathology Images
Multi-instance learning methods have addressed this challenge, leveraging image patches to classify slides with models pretrained via Self-Supervised Learning (SSL).
Moving Object Segmentation: All You Need Is SAM (and Flow)
The objective of this paper is motion segmentation -- discovering and segmenting the moving objects in a video.
Hypergraph Self-supervised Learning with Sampling-efficient Signals
Self-supervised learning (SSL) provides a promising alternative for representation learning on hypergraphs without costly labels.
Masked Autoencoders for Microscopy are Scalable Learners of Cellular Biology
Featurizing microscopy images for use in biological research remains a significant challenge, especially for large-scale experiments spanning millions of images.
Multi-Task Multi-Modal Self-Supervised Learning for Facial Expression Recognition
To that end, we examine the performance of learning through different combinations of self-supervised tasks on the facial expression recognition downstream task.
How to build the best medical image segmentation algorithm using foundation models: a comprehensive empirical study with Segment Anything Model
Automated segmentation is a fundamental medical image analysis task, which enjoys significant advances due to the advent of deep learning.
Can We Break Free from Strong Data Augmentations in Self-Supervised Learning?
Self-supervised learning (SSL) has emerged as a promising solution for addressing the challenge of limited labeled data in deep neural networks (DNNs), offering scalability potential.
An Experimental Comparison Of Multi-view Self-supervised Methods For Music Tagging
In this study, we expand the scope of pretext tasks applied to music by investigating and comparing the performance of new self-supervised methods for music tagging.
DEGNN: Dual Experts Graph Neural Network Handling Both Edge and Node Feature Noise
Leveraging these modified representations, DEGNN subsequently addresses downstream tasks, ensuring robustness against noise present in both edges and node features of real-world graphs.