Self-Supervised Learning
1737 papers with code • 10 benchmarks • 41 datasets
Self-Supervised Learning was proposed to exploit unlabeled data, building on the success of supervised learning. Producing a dataset with good labels is expensive, while unlabeled data is generated all the time. The motivation of Self-Supervised Learning is therefore to make use of this large amount of unlabeled data. The main idea is to generate labels from the unlabeled data itself, according to its structure or characteristics, and then train on these generated labels in a supervised manner. Self-Supervised Learning is widely used in representation learning, where a model learns the latent features of the data. The technique is often employed in computer vision, video processing, and robot control.
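As a minimal sketch of this idea, the snippet below implements the classic rotation-prediction pretext task: each unlabeled image is rotated by 0, 90, 180, or 270 degrees, and the rotation index becomes a pseudo-label derived purely from the data itself, yielding a dataset a model can then be trained on in the usual supervised way. The function name and shapes are illustrative, not from any specific library.

```python
import numpy as np

def make_rotation_pretext(images):
    """Turn unlabeled images (N, H, W) into a labeled pretext dataset.

    Each image is rotated by k quarter-turns (k = 0..3); the rotation
    index k serves as the pseudo-label, so no human annotation is needed.
    Returns (4*N, H, W) inputs and (4*N,) integer labels.
    """
    xs, ys = [], []
    for img in images:
        for k in range(4):
            xs.append(np.rot90(img, k))  # label generated from the data's own structure
            ys.append(k)
    return np.stack(xs), np.array(ys)

# Usage: 5 unlabeled 8x8 "images" become 20 labeled training examples.
imgs = np.random.rand(5, 8, 8)
x, y = make_rotation_pretext(imgs)
```

A classifier trained to predict `y` from `x` must learn features sensitive to object orientation, and those learned representations can then be transferred to downstream tasks.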
Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration
Image source: LeCun
Libraries
Use these libraries to find Self-Supervised Learning models and implementations
Latest papers
LGSDF: Continual Global Learning of Signed Distance Fields Aided by Local Updating
Implicit reconstruction of ESDF (Euclidean Signed Distance Field) involves training a neural network to regress the signed distance from any point to the nearest obstacle, which has the advantages of lightweight storage and continuous querying.
Test-Time Zero-Shot Temporal Action Localization
To this end, we introduce a novel method that performs Test-Time adaptation for Temporal Action Localization (T3AL).
Mixup Domain Adaptations for Dynamic Remaining Useful Life Predictions
MDAN encompasses a three-staged mechanism where the mix-up strategy is not only performed to regularize the source and target domains but also applied to establish an intermediate mix-up domain where the source and target domains are aligned.
MedIAnomaly: A comparative study of anomaly detection in medical images
Anomaly detection (AD) aims at detecting abnormal samples that deviate from the expected normal patterns.
A Comprehensive Survey on Self-Supervised Learning for Recommendation
Recommender systems play a crucial role in tackling the challenge of information overload by delivering personalized recommendations based on individual user preferences.
A Unified Membership Inference Method for Visual Self-supervised Encoder via Part-aware Capability
In this setting, considering that self-supervised models can be trained by completely different self-supervised paradigms, e.g., masked image modeling and contrastive learning, with complex training details, we propose a unified membership inference method called PartCrop.
SelfPose3d: Self-Supervised Multi-Person Multi-View 3d Pose Estimation
Unlike current state-of-the-art fully-supervised methods, our approach does not require any 2d or 3d ground-truth poses and uses only the multi-view input images from a calibrated camera setup and 2d pseudo poses generated from an off-the-shelf 2d human pose estimator.
Harnessing Data and Physics for Deep Learning Phase Recovery
Two main deep learning phase recovery strategies are data-driven (DD) with supervised learning mode and physics-driven (PD) with self-supervised learning mode.
HypeBoy: Generative Self-Supervised Representation Learning on Hypergraphs
Based on the generative SSL task, we propose a hypergraph SSL method, HypeBoy.
DailyMAE: Towards Pretraining Masked Autoencoders in One Day
Recently, masked image modeling (MIM), an important self-supervised learning (SSL) method, has drawn attention for its effectiveness in learning data representation from unlabeled data.