Self-Supervised Learning
1688 papers with code • 10 benchmarks • 41 datasets
Self-Supervised Learning was proposed to exploit unlabeled data, building on the success of supervised learning. Producing a well-labeled dataset is expensive, while unlabeled data is generated all the time, and the motivation of Self-Supervised Learning is to make use of this large amount of unlabeled data. The main idea is to derive labels from the unlabeled data itself, according to its structure or characteristics, and then train on these generated labels in a supervised manner. Self-Supervised Learning is widely used in representation learning, where a model learns the latent features of the data. The technique is often employed in computer vision, video processing, and robot control.
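As an illustration of generating labels from the data itself, here is a minimal sketch of a classic pretext task (rotation prediction, in the spirit of RotNet): each unlabeled image is rotated by a random multiple of 90°, and the rotation index becomes a free label a classifier can be trained on. The function and variable names are illustrative, not from any specific library.

```python
import numpy as np

def make_rotation_pretext_dataset(images, seed=None):
    """Build a self-supervised dataset from unlabeled images.

    Each image is rotated by a random multiple of 90 degrees; the
    rotation index (0-3) becomes the "free" pseudo-label, so a
    classifier can then be trained in a supervised manner without
    any human annotation.
    """
    rng = np.random.default_rng(seed)
    rotated, labels = [], []
    for img in images:
        k = int(rng.integers(0, 4))      # pseudo-label: 0, 90, 180, or 270 degrees
        rotated.append(np.rot90(img, k))
        labels.append(k)
    return np.stack(rotated), np.array(labels)

# Unlabeled data: 8 random 32x32 grayscale "images".
unlabeled = np.random.default_rng(0).random((8, 32, 32))
x, y = make_rotation_pretext_dataset(unlabeled, seed=0)
```

A network trained to predict `y` from `x` must learn features such as object orientation and shape, which then transfer to downstream tasks.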
Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration
Image source: LeCun
Libraries
Use these libraries to find Self-Supervised Learning models and implementations.
Latest papers
GenView: Enhancing View Quality with Pretrained Generative Model for Self-Supervised Learning
To tackle these challenges, we present GenView, a controllable framework that augments the diversity of positive views leveraging the power of pretrained generative models while preserving semantics.
Learning Useful Representations of Recurrent Neural Network Weight Matrices
The program of an RNN is its weight matrix.
A Versatile Framework for Multi-scene Person Re-identification
To overcome significant variations between images across camera views, many variants of ReID models have been developed to address challenges such as resolution change, clothing change, occlusion, and modality change.
Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples
In response to these challenges, we propose Genetic Evolution-Nurtured Adversarial Fine-tuning (Gen-AF), a two-stage adversarial fine-tuning approach aimed at enhancing the robustness of downstream models.
Self-Supervised Learning for Time Series: Contrastive or Generative?
In this paper, we will present a comprehensive comparative study between contrastive and generative methods in time series.
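To make the contrastive side of this comparison concrete, below is a minimal NumPy sketch of an InfoNCE-style contrastive objective, the loss family used by methods such as SimCLR; the function name and shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss (NumPy sketch).

    z1[i] and z2[i] are embeddings of two augmented views of the
    same sample (a positive pair); every other row in the batch
    acts as a negative. Lower loss means positives score higher
    than negatives.
    """
    # L2-normalize so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                 # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives lie on the diagonal: view i of z1 matches view i of z2.
    return -np.mean(np.diag(log_probs))
```

Generative methods instead reconstruct masked or corrupted inputs; the study above compares when each objective is preferable for time series.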
LAFS: Landmark-based Facial Self-supervised Learning for Face Recognition
This enables our method, namely LAndmark-based Facial Self-supervised learning (LAFS), to learn key representations that are more critical for face recognition.
SIRST-5K: Exploring Massive Negatives Synthesis with Self-supervised Learning for Robust Infrared Small Target Detection
The quality, quantity, and diversity of the infrared dataset are critical to the detection of small targets.
Self-Supervision in Time for Satellite Images (S3-TSS): A novel method of SSL technique in Satellite images
With the limited availability of labeled data with various atmospheric conditions in remote sensing images, it seems useful to work with self-supervised algorithms.
Self-supervised Photographic Image Layout Representation Learning
This shortfall makes the learning process for photographic image layouts suboptimal.
FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive Models
However, recent research proposed poisoning attacks that cause a catastrophic loss in the accuracy of the global model when adversaries, posed as benign clients, are present in a group of clients.