Self-Supervised Learning

1734 papers with code • 10 benchmarks • 41 datasets

Self-Supervised Learning was proposed to extend the success of supervised learning to unlabeled data. Producing a dataset with good labels is expensive, while unlabeled data is generated all the time. The motivation of Self-Supervised Learning is therefore to make use of this large amount of unlabeled data. Its main idea is to generate labels from the unlabeled data itself, according to the structure or characteristics of the data, and then train on the resulting data in a supervised manner. Self-Supervised Learning is widely used in representation learning, where a model learns the latent features of the data. The technique is often employed in computer vision, video processing and robot control.
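The label-generation idea can be illustrated with a classic pretext task, rotation prediction: each pseudo-label is derived from a transform applied to the data itself, after which any ordinary classifier can be trained in a supervised manner. A minimal NumPy sketch (illustrative only, not tied to any specific paper on this page):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rotation_task(images):
    """Generate pseudo-labels from unlabeled images: rotate each image by a
    random multiple of 90 degrees and use the rotation index as the target."""
    xs, ys = [], []
    for img in images:
        k = int(rng.integers(0, 4))   # pseudo-label, derived from the data transform
        xs.append(np.rot90(img, k))   # input: the rotated image
        ys.append(k)                  # target: which rotation was applied
    return np.stack(xs), np.array(ys)

# Unlabeled data: ten 8x8 "images"
unlabeled = rng.random((10, 8, 8))
x, y = make_rotation_task(unlabeled)
# (x, y) can now train any classifier in the usual supervised fashion;
# the learned features transfer to downstream tasks.
```

No human annotation is involved: the supervision signal comes entirely from the transform, which is the defining property of a self-supervised pretext task.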

Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration

Image source: LeCun


Latest papers with no code

An Effective Automated Speaking Assessment Approach to Mitigating Data Scarcity and Imbalanced Distribution

no code yet • 11 Apr 2024

Automated speaking assessment (ASA) typically involves automatic speech recognition (ASR) and hand-crafted feature extraction from the ASR transcript of a learner's speech.

Mitigating Object Dependencies: Improving Point Cloud Self-Supervised Learning through Object Exchange

no code yet • 11 Apr 2024

Subsequently, we introduce a context-aware feature learning strategy, which encodes object patterns without relying on their specific context by aggregating object features across various scenes.

Encoding Urban Ecologies: Automated Building Archetype Generation through Self-Supervised Learning for Energy Modeling

no code yet • 11 Apr 2024

As the global population and urbanization expand, the building sector has emerged as the predominant energy consumer and carbon emission contributor.

LaTiM: Longitudinal representation learning in continuous-time models to predict disease progression

no code yet • 10 Apr 2024

This work proposes a novel framework for analyzing disease progression using time-aware neural ordinary differential equations (NODE).
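As background, a neural ODE models hidden-state dynamics dh/dt = f(h, t) and obtains the state at a later time with a numerical solver, which is what makes it natural for continuous-time progression modeling. A minimal fixed-step Euler sketch of that solve (an assumption-laden illustration; it does not reproduce the paper's time-aware formulation):

```python
import numpy as np

def euler_integrate(f, h0, t0, t1, steps=1000):
    """Integrate dh/dt = f(h, t) with fixed-step Euler — a minimal stand-in
    for the adaptive solvers typically used with neural ODEs."""
    h, t = np.asarray(h0, dtype=float), t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h, t)  # one explicit Euler step
        t += dt
    return h

# Sanity check: dh/dt = -h has the closed form h(t) = h0 * exp(-t)
h1 = euler_integrate(lambda h, t: -h, [1.0], 0.0, 1.0)
```

In a neural ODE, the hand-written dynamics function is replaced by a learned network, and gradients flow through the solver during training.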

How to Craft Backdoors with Unlabeled Data Alone?

no code yet • 10 Apr 2024

Relying only on unlabeled data, self-supervised learning (SSL) can learn rich features in an economical and scalable way.

Wild Visual Navigation: Fast Traversability Learning via Pre-Trained Models and Online Self-Supervision

no code yet • 10 Apr 2024

Natural environments such as forests and grasslands are challenging for robotic navigation because tall grass, twigs, or bushes can be falsely perceived as rigid obstacles.

Anomaly Detection in Electrocardiograms: Advancing Clinical Diagnosis Through Self-Supervised Learning

no code yet • 7 Apr 2024

We introduce a novel self-supervised learning framework for ECG AD, utilizing a vast dataset of normal ECGs to autonomously detect and localize cardiac anomalies.
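Training only on normal data and scoring inputs by reconstruction error is one common way to realize this kind of self-supervised anomaly detection. A hedged sketch using a PCA "model" and synthetic sinusoids standing in for normal ECG beats (an assumption for illustration; this is not the paper's framework):

```python
import numpy as np

rng = np.random.default_rng(1)

# "Normal" training signals: phase-shifted sinusoids (stand-in for normal beats)
t = np.linspace(0, 2 * np.pi, 64)
normal = np.stack([np.sin(t + p) for p in rng.random(200) * 2 * np.pi])

# Fit a low-rank basis on normal data only — no anomaly labels are used
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:2]  # top-2 principal components span the normal manifold

def anomaly_score(sig):
    """Reconstruction error under the normal-data basis; large = anomalous."""
    centered = sig - mean
    recon = centered @ basis.T @ basis
    return float(np.linalg.norm(centered - recon))

ok = anomaly_score(np.sin(t + 0.3))           # in-distribution signal
bad = anomaly_score(rng.random(64) * 2 - 1)   # out-of-distribution noise
```

A deep framework replaces PCA with a learned encoder/decoder, but the principle is the same: whatever the normal-only model cannot reconstruct is flagged, and the per-sample error can also localize where the anomaly occurs.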

HDR Imaging for Dynamic Scenes with Events

no code yet • 4 Apr 2024

High dynamic range imaging (HDRI) for real-world dynamic scenes is challenging because moving objects may lead to hybrid degradation of low dynamic range and motion blur.

Multi-modal Learning for WebAssembly Reverse Engineering

no code yet • 4 Apr 2024

WasmRev is pre-trained using self-supervised learning on a large-scale multi-modal corpus encompassing source code, code documentation and the compiled WebAssembly, without requiring labeled data.

Generative-Contrastive Heterogeneous Graph Neural Network

no code yet • 3 Apr 2024

In recent years, inspired by self-supervised learning, contrastive heterogeneous graph neural networks (HGNNs) have shown great potential by utilizing data augmentation and discriminators for downstream tasks.