Self-Supervised Learning

1749 papers with code • 10 benchmarks • 41 datasets

Self-Supervised Learning was proposed to exploit unlabeled data, building on the success of supervised learning. Producing a dataset with good labels is expensive, while unlabeled data is generated all the time. The motivation of Self-Supervised Learning is therefore to make use of this large amount of unlabeled data. Its main idea is to generate labels from the unlabeled data itself, according to the structure or characteristics of the data, and then to train on these generated labels in a supervised manner. Self-Supervised Learning is widely used in representation learning to make a model learn latent features of the data. The technique is often employed in computer vision, video processing, and robot control.

Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration

Image source: LeCun
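
The "generate labels from the data itself" idea is easiest to see in code. Below is a minimal sketch of a contrastive pretext task in PyTorch, in the spirit of SimCLR's NT-Xent loss: two augmented views of the same image form a positive pair, and every other sample in the batch serves as a negative. The toy encoder and the noise-based "augmentations" are illustrative placeholders, not taken from any cited paper.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent: each sample's 'label' is the index of its other view,
    so supervision comes from the data itself, not from annotation."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2n, d) unit embeddings
    sim = (z @ z.t()) / temperature                      # pairwise cosine similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))  # mask self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])  # index of each sample's other view
    return F.cross_entropy(sim, targets)

# Toy setup: a linear encoder and additive noise standing in for real augmentations.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
images = torch.randn(16, 3, 32, 32)                      # unlabeled batch
view1 = images + 0.1 * torch.randn_like(images)
view2 = images + 0.1 * torch.randn_like(images)
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()                                          # train as if it were supervised
```

After pretraining on such a pretext task, the encoder's representations are typically reused or fine-tuned on a downstream task with far fewer labels.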

Libraries

Use these libraries to find Self-Supervised Learning models and implementations (11 libraries available).

Latest papers with no code

HYPE: Hyperbolic Entailment Filtering for Underspecified Images and Texts

no code yet • 26 Apr 2024

In an era where the volume of data drives the effectiveness of self-supervised learning, the specificity and clarity of data semantics play a crucial role in model training.

Self-supervised visual learning in the low-data regime: a comparative evaluation

no code yet • 26 Apr 2024

Self-Supervised Learning (SSL) is a valuable and robust training methodology for contemporary Deep Neural Networks (DNNs), enabling unsupervised pretraining on a 'pretext task' that does not require ground-truth labels/annotation.

Neural Modes: Self-supervised Learning of Nonlinear Modal Subspaces

no code yet • 26 Apr 2024

We propose a self-supervised approach for learning physics-based subspaces for real-time simulation.

Point-JEPA: A Joint Embedding Predictive Architecture for Self-Supervised Learning on Point Cloud

no code yet • 25 Apr 2024

To this end, we introduce a sequencer that orders point cloud tokens to efficiently compute and utilize token proximity based on their indices during target and context selection.

MiM: Mask in Mask Self-Supervised Pre-Training for 3D Medical Image Analysis

no code yet • 24 Apr 2024

We further scale up the MiM to large pre-training datasets with more than 10k volumes, showing that large-scale pre-training can further enhance the performance of downstream tasks.

S2DEVFMAP: Self-Supervised Learning Framework with Dual Ensemble Voting Fusion for Maximizing Anomaly Prediction in Timeseries

no code yet • 24 Apr 2024

Traditional anomaly detection methods often face challenges in handling diverse data characteristics and variations in noise levels, resulting in limited effectiveness.

Additive Margin in Contrastive Self-Supervised Frameworks to Learn Discriminative Speaker Representations

no code yet • 23 Apr 2024

Implementing these two modifications to SimCLR improves performance and results in 7.85% EER on VoxCeleb1-O, outperforming other equivalent methods.
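
The additive margin named in the title is a well-known trick from metric learning (AM-Softmax style). As a hedged illustration only, not the paper's exact formulation, a margin is typically injected into a SimCLR-style loss by shrinking the positive pair's cosine similarity before the softmax, so embeddings must beat the negatives by at least that margin:

```python
import torch
import torch.nn.functional as F

def additive_margin_contrastive_loss(z1, z2, temperature=0.1, margin=0.2):
    """SimCLR-style loss with an additive margin on the positive pair
    (illustrative; details and hyperparameters may differ from the paper)."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t()                                       # cosine similarities in [-1, 1]
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    penalty = torch.zeros_like(sim)
    penalty[torch.arange(2 * n), targets] = margin        # penalize positive pairs only
    return F.cross_entropy((sim - penalty) / temperature, targets)
```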

Non-Uniform Exposure Imaging via Neuromorphic Shutter Control

no code yet • 22 Apr 2024

To address this challenge, we propose a novel Neuromorphic Shutter Control (NSC) system to avoid motion blur and alleviate instantaneous noise, where the extremely low latency of events is leveraged to monitor real-time motion and facilitate scene-adaptive exposure.

Text-dependent Speaker Verification (TdSV) Challenge 2024: Challenge Evaluation Plan

no code yet • 20 Apr 2024

This document outlines the Text-dependent Speaker Verification (TdSV) Challenge 2024, which centers on analyzing and exploring novel approaches for text-dependent speaker verification.

Hyperspectral Anomaly Detection with Self-Supervised Anomaly Prior

no code yet • 20 Apr 2024

The majority of existing hyperspectral anomaly detection (HAD) methods use the low-rank representation (LRR) model to separate the background and anomaly components, where the anomaly component is optimized by handcrafted sparse priors (e.g., $\ell_{2,1}$-norm).
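
For reference, a typical LRR-based decomposition used in HAD (a common formulation, not necessarily the exact one adopted in this paper) is

$$\min_{\mathbf{Z},\,\mathbf{E}} \; \|\mathbf{Z}\|_* + \lambda \|\mathbf{E}\|_{2,1} \quad \text{s.t.} \quad \mathbf{X} = \mathbf{D}\mathbf{Z} + \mathbf{E},$$

where $\mathbf{X}$ stacks the hyperspectral pixels as columns, $\mathbf{D}$ is a background dictionary, the nuclear norm $\|\mathbf{Z}\|_*$ keeps the reconstructed background low-rank, and the handcrafted sparse prior $\|\mathbf{E}\|_{2,1} = \sum_j \|\mathbf{e}_j\|_2$ pushes the anomaly component $\mathbf{E}$ to be nonzero in only a few pixel columns.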