Self-Supervised Learning
1688 papers with code • 10 benchmarks • 41 datasets
Self-Supervised Learning was proposed to exploit unlabeled data, building on the success of supervised learning. Producing a dataset with good labels is expensive, while unlabeled data is generated all the time. The motivation of Self-Supervised Learning is to make use of this large amount of unlabeled data. The main idea is to generate labels from the unlabeled data itself, according to the structure or characteristics of the data, and then train on these generated labels in a supervised manner. Self-Supervised Learning is widely used in representation learning to make a model learn latent features of the data. The technique is often employed in computer vision, video processing, and robot control.
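The idea of generating labels from the data itself can be sketched with a classic pretext task, rotation prediction: each unlabeled image is rotated by a random multiple of 90 degrees, and the rotation index becomes the label. This is a minimal illustration, not any specific paper's method; the random arrays stand in for real unlabeled images.

```python
import numpy as np

def make_rotation_task(images, rng):
    """Generate pseudo-labels from unlabeled images: rotate each image by a
    random multiple of 90 degrees and use the rotation index as the label."""
    labels = rng.integers(0, 4, size=len(images))          # 0, 1, 2, or 3 quarter-turns
    rotated = [np.rot90(img, int(k)) for img, k in zip(images, labels)]
    return np.stack(rotated), labels

rng = np.random.default_rng(0)
images = rng.random((8, 32, 32))      # stand-in for a batch of unlabeled images
x, y = make_rotation_task(images, rng)
# x and y now form a supervised dataset: any classifier trained to predict y
# from x must learn image features, with no human labeling involved.
```

A backbone pretrained on this pretext task can then be reused for downstream tasks, which is the representation-learning use case described above.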
Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration
Image source: LeCun
Libraries
Use these libraries to find Self-Supervised Learning models and implementations
Latest papers with no code
Exploring the Task-agnostic Trait of Self-supervised Learning in the Context of Detecting Mental Disorders
In the context of the SSL model predicting masked frames, the generated global representations are also noted to exhibit task-agnostic traits.
Trajectory Regularization Enhances Self-Supervised Geometric Representation
To address this gap, we introduce a new pose-estimation benchmark for assessing SSL geometric representations, which demands training without semantic or pose labels and achieving proficiency in both semantic and geometric downstream tasks.
Self-Supervised Backbone Framework for Diverse Agricultural Vision Tasks
Computer vision in agriculture is game-changing with its ability to transform farming into a data-driven, precise, and sustainable industry.
Point-DETR3D: Leveraging Imagery Data with Spatial Point Prior for Weakly Semi-supervised 3D Object Detection
Training high-accuracy 3D detectors necessitates massive labeled 3D annotations with 7 degrees of freedom, which is laborious and time-consuming.
Exploring Green AI for Audio Deepfake Detection
In contrast to existing methods that fine-tune SSL models and employ additional deep neural networks for downstream tasks, we exploit classical machine learning algorithms, such as logistic regression and shallow neural networks, using SSL embeddings extracted from the pre-trained model.
AdaProj: Adaptively Scaled Angular Margin Subspace Projections for Anomalous Sound Detection with Auxiliary Classification Tasks
The state-of-the-art approach to semi-supervised anomalous sound detection is to first learn an embedding space using auxiliary classification tasks based on meta-information or self-supervised learning, and then estimate the distribution of normal data.
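The second stage of that recipe, estimating the distribution of normal data in an already-learned embedding space, can be sketched with a simple Gaussian model whose squared Mahalanobis distance serves as the anomaly score. This is a generic sketch of the density-estimation step, not the paper's AdaProj method; the random embeddings are placeholders.

```python
import numpy as np

def fit_normal_model(embeddings):
    """Fit a Gaussian to embeddings of normal sounds: mean and precision matrix."""
    mu = embeddings.mean(axis=0)
    cov = np.cov(embeddings, rowvar=False) + 1e-6 * np.eye(embeddings.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_score(x, mu, prec):
    """Squared Mahalanobis distance to the normal-data Gaussian; larger = more anomalous."""
    d = x - mu
    return float(d @ prec @ d)

rng = np.random.default_rng(0)
normal_embeddings = rng.normal(0.0, 1.0, size=(500, 8))   # placeholder embeddings
mu, prec = fit_normal_model(normal_embeddings)

score_typical = anomaly_score(np.zeros(8), mu, prec)      # near the normal cluster
score_outlier = anomaly_score(np.full(8, 5.0), mu, prec)  # far from the normal cluster
```

Test-time sounds whose embeddings score far above those of normal training data are flagged as anomalous; a threshold is chosen on held-out normal data.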
Federated Semi-supervised Learning for Medical Image Segmentation with intra-client and inter-client Consistency
Intra-client and inter-client consistency learning are introduced to smooth predictions at the data level and avoid confirmation bias of local models.
Learning Cross-view Visual Geo-localization without Ground Truth
We observe that training on unlabeled cross-view images presents significant challenges, including the need to establish relationships within unlabeled data and reconcile view discrepancies between uncertain queries and references.
Low-Trace Adaptation of Zero-shot Self-supervised Blind Image Denoising
Deep learning-based denoisers have been the focus of recent developments in image denoising.
Emotic Masked Autoencoder with Attention Fusion for Facial Expression Recognition
Facial Expression Recognition (FER) is a critical task within computer vision with diverse applications across various domains.