Unsupervised Pre-training

103 papers with code • 2 benchmarks • 7 datasets

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
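
To make the recipe concrete, here is a minimal sketch of the two-stage pattern: pre-train an encoder with a masked-reconstruction pretext task on unlabeled data, then fine-tune it on a small labeled set. The architecture, masking scheme, and synthetic data below are illustrative assumptions, not taken from any paper on this page.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
unlabeled = torch.rand(512, 784)          # stand-in for a large unlabeled corpus
labeled_x = torch.rand(64, 784)           # small labeled set for fine-tuning
labeled_y = torch.randint(0, 10, (64,))

encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))

# Stage 1: self-supervised pre-training -- reconstruct randomly masked inputs.
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
for _ in range(100):
    mask = (torch.rand_like(unlabeled) > 0.5).float()
    recon = decoder(encoder(unlabeled * mask))
    loss = ((recon - unlabeled) ** 2 * (1 - mask)).mean()  # score masked positions only
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised fine-tuning of the pre-trained encoder plus a new head.
head = nn.Linear(64, 10)
opt = torch.optim.Adam([*encoder.parameters(), *head.parameters()], lr=1e-4)
for _ in range(100):
    loss = nn.functional.cross_entropy(head(encoder(labeled_x)), labeled_y)
    opt.zero_grad(); loss.backward(); opt.step()
```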

Latest papers with no code

Semi-Supervised End-To-End Contrastive Learning For Time Series Classification

no code yet • 13 Oct 2023

The unsupervised and supervised contrastive losses and the classification loss are jointly used to optimize the encoder and the classifier.
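
A minimal sketch of how such a joint objective can be assembled, assuming a SimCLR-style NT-Xent term for the unsupervised contrastive loss, a SupCon-style term for the supervised contrastive loss, and cross-entropy for classification; the weights `w_unsup` and `w_sup` and the loss details are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """Unsupervised contrastive (NT-Xent) loss over two augmented views."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = (z @ z.t()) / tau
    sim = sim.masked_fill(torch.eye(len(z), dtype=torch.bool), float('-inf'))
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)          # positive = the other view

def sup_con(z, labels, tau=0.5):
    """Supervised contrastive loss: same-label samples act as positives."""
    z = F.normalize(z, dim=1)
    sim = (z @ z.t()) / tau
    self_mask = torch.eye(len(z), dtype=torch.bool)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    per_anchor = log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_mask.sum(1).clamp(min=1)
    return -per_anchor.mean()

def joint_loss(logits, labels, z1, z2, w_unsup=1.0, w_sup=1.0):
    """Classification + unsupervised contrastive + supervised contrastive."""
    return (F.cross_entropy(logits, labels)
            + w_unsup * nt_xent(z1, z2)
            + w_sup * sup_con(torch.cat([z1, z2]), labels.repeat(2)))
```

In a semi-supervised setting, the supervised terms would be computed only on the labeled portion of each batch.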

Automated clinical coding using off-the-shelf large language models

no code yet • 10 Oct 2023

The task of assigning diagnostic ICD codes to patient hospital admissions is typically performed by expert human coders.

CUPre: Cross-domain Unsupervised Pre-training for Few-Shot Cell Segmentation

no code yet • 6 Oct 2023

While pre-training on object detection tasks, such as Common Objects in Context (COCO) [1], can significantly boost the performance of cell segmentation, it still requires massive finely annotated cell images [2], with bounding boxes, masks, and cell types for every cell in every image, to fine-tune the pre-trained model.

Pre-Training and Fine-Tuning Generative Flow Networks

no code yet • 5 Oct 2023

However, as they are typically trained from a given extrinsic reward function, how to leverage the power of pre-training and train GFlowNets in an unsupervised fashion for efficient adaptation to downstream tasks remains an important open challenge.

Classifying Whole Slide Images: What Matters?

no code yet • 5 Oct 2023

Recently, many algorithms have been proposed for the classification of very high-resolution whole slide images (WSIs).

DP-SGD for non-decomposable objective functions

no code yet • 4 Oct 2023

To overcome this issue, we develop a new DP-SGD variant for similarity-based loss functions -- in particular the commonly used contrastive loss -- that manipulates gradients of the objective function in a novel way to obtain a sensitivity of the summed gradient that is $O(1)$ for batch size $n$.
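
For context, a sketch of standard DP-SGD, which clips each per-example gradient to bound sensitivity before adding Gaussian noise. This baseline presumes the loss decomposes across examples, the very assumption a contrastive loss violates; the paper's variant for non-decomposable objectives is not reproduced here, and all hyperparameters below are illustrative.

```python
import torch

def dp_sgd_step(model, loss_fn, xs, ys, lr=0.1, clip_norm=1.0, noise_mult=1.0):
    """One DP-SGD step: clip each per-example gradient, sum, add noise."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):                      # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach() for p in params]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)  # bound sensitivity
        for s, g in zip(summed, grads):
            s.add_(g, alpha=float(scale))
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * noise_mult * clip_norm  # Gaussian mechanism
            p.add_(-(lr / len(xs)) * (s + noise))
```

Clipping bounds each example's contribution to the summed gradient, which is what lets noise calibrated to `clip_norm` provide the privacy guarantee.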

A Brief History of Prompt: Leveraging Language Models (Through Advanced Prompting)

no code yet • 30 Sep 2023

This paper presents a comprehensive exploration of the evolution of prompt engineering and generation in the field of natural language processing (NLP).

Unsupervised Pre-Training for Vietnamese Automatic Speech Recognition in the HYKIST Project

no code yet • 26 Sep 2023

In this thesis, we describe our efforts to construct ASR systems for a conversational telephone speech recognition task in the medical domain for the Vietnamese language, to assist emergency-room contact between doctors and patients across linguistic barriers.

Examining the Effect of Pre-training on Time Series Classification

no code yet • 11 Sep 2023

(iv) Adding more pre-training data does not improve generalization, but it can strengthen the advantage of pre-training on the original data volume, such as faster convergence.

Enhancing the vocal range of single-speaker singing voice synthesis with melody-unsupervised pre-training

no code yet • 1 Sep 2023

Specifically, in the pre-training step, we design a phoneme predictor that produces frame-level phoneme probability vectors as phonemic timing information and a speaker encoder that models the timbre variations of different singers, and we directly estimate frame-level f0 values from the audio to provide pitch information.
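
As a structural sketch only: the forward pass below wires up the three conditioning streams the excerpt names (frame-level phoneme posteriors, a speaker embedding, frame-level f0) into an acoustic decoder. All module names, dimensions, and the reconstruction target are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class PretrainSVS(nn.Module):
    def __init__(self, n_mels=80, n_phonemes=60, d=256):
        super().__init__()
        self.phoneme_predictor = nn.Sequential(   # frame-level phoneme posteriors
            nn.Linear(n_mels, d), nn.ReLU(), nn.Linear(d, n_phonemes))
        self.speaker_encoder = nn.GRU(n_mels, d, batch_first=True)
        self.decoder = nn.Sequential(
            nn.Linear(n_phonemes + d + 1, d), nn.ReLU(), nn.Linear(d, n_mels))

    def forward(self, mels, f0):                  # mels: (B, T, n_mels); f0: (B, T)
        phon = self.phoneme_predictor(mels).softmax(-1)   # phonemic timing info
        _, h = self.speaker_encoder(mels)                 # timbre embedding
        spk = h[-1].unsqueeze(1).expand(-1, mels.size(1), -1)
        cond = torch.cat([phon, spk, f0.unsqueeze(-1)], dim=-1)  # add pitch info
        return self.decoder(cond)                 # reconstruct acoustic features (assumed target)

x = torch.randn(2, 100, 80); f0 = torch.rand(2, 100) * 400
out = PretrainSVS()(x, f0)                        # -> (2, 100, 80)
```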