Unsupervised Pre-training

103 papers with code • 2 benchmarks • 7 datasets

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
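
The recipe shared by the papers below is a two-stage loop: fit an auxiliary objective (reconstruction, masking, contrastive matching) on unlabeled data, then reuse the learned encoder for the supervised target task. A minimal PyTorch sketch of that loop, with an autoencoder as the auxiliary task and made-up tensor shapes:

```python
import torch
import torch.nn as nn

# Illustrative encoder/decoder; real systems use task-specific architectures.
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))

# 1) Unsupervised pre-training: reconstruct unlabeled inputs (auxiliary task).
pretrain_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)
unlabeled = torch.rand(512, 784)  # stand-in for an unlabeled dataset
for _ in range(10):
    recon = decoder(encoder(unlabeled))
    loss = nn.functional.mse_loss(recon, unlabeled)
    pretrain_opt.zero_grad()
    loss.backward()
    pretrain_opt.step()

# 2) Supervised fine-tuning: reuse the pre-trained encoder, add a task head.
head = nn.Linear(64, 10)
finetune_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-4
)
labeled_x, labeled_y = torch.rand(64, 784), torch.randint(0, 10, (64,))
for _ in range(10):
    logits = head(encoder(labeled_x))
    loss = nn.functional.cross_entropy(logits, labeled_y)
    finetune_opt.zero_grad()
    loss.backward()
    finetune_opt.step()
```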

Libraries

Use these libraries to find Unsupervised Pre-training models and implementations

Most implemented papers

TabTransformer: Tabular Data Modeling Using Contextual Embeddings

lucidrains/tab-transformer-pytorch 11 Dec 2020

We propose TabTransformer, a novel deep tabular data modeling architecture for supervised and semi-supervised learning.
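
The core idea is to pass a table's categorical columns through a Transformer so each column embedding becomes contextual, then concatenate those contextual embeddings with the layer-normalized continuous features and apply an MLP head. The sketch below is a structural illustration in plain PyTorch, not the lucidrains/tab-transformer-pytorch API; layer sizes and hyperparameters are arbitrary.

```python
import torch
import torch.nn as nn

class TabTransformerSketch(nn.Module):
    """Structural sketch: categorical columns -> embeddings -> Transformer
    (contextual embeddings), concatenated with layer-normed continuous
    features, then an MLP head."""
    def __init__(self, card_per_cat_col, n_cont, dim=32, depth=6, heads=8, n_classes=2):
        super().__init__()
        self.embeds = nn.ModuleList([nn.Embedding(c, dim) for c in card_per_cat_col])
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        self.cont_norm = nn.LayerNorm(n_cont)
        self.mlp = nn.Sequential(
            nn.Linear(dim * len(card_per_cat_col) + n_cont, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x_categ, x_cont):
        tokens = torch.stack(
            [emb(x_categ[:, i]) for i, emb in enumerate(self.embeds)], dim=1
        )
        ctx = self.transformer(tokens)  # contextual column embeddings
        flat = torch.cat([ctx.flatten(1), self.cont_norm(x_cont)], dim=-1)
        return self.mlp(flat)

model = TabTransformerSketch(card_per_cat_col=(10, 5, 6), n_cont=4)
logits = model(torch.randint(0, 5, (8, 3)), torch.randn(8, 4))
```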

Leveraging Pre-trained Checkpoints for Sequence Generation Tasks

huggingface/transformers TACL 2020

Unsupervised pre-training of large neural models has recently revolutionized Natural Language Processing.
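
The paper's warm-starting recipe, initializing a sequence-to-sequence model from publicly available pre-trained encoder checkpoints, is exposed in huggingface/transformers as EncoderDecoderModel. A minimal sketch; the checkpoint choice and generation settings are illustrative, and the cross-attention weights stay randomly initialized until the model is fine-tuned on a generation task, so raw outputs are not yet meaningful:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Warm-start a seq2seq model from pre-trained BERT checkpoints
# (encoder and decoder both initialized from "bert-base-uncased" here).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# Required for generation: tell the decoder how to start and pad sequences.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("Unsupervised pre-training has revolutionized NLP.", return_tensors="pt")
output_ids = model.generate(inputs.input_ids, max_length=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```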

A Transformer-based Framework for Multivariate Time Series Representation Learning

gzerveas/mvts_transformer 6 Oct 2020

In this work, we propose for the first time a transformer-based framework for unsupervised representation learning of multivariate time series.
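
The pre-training objective is a denoising one: portions of the input series are masked out and a Transformer encoder is trained to reconstruct them. A hedged sketch of that masked-reconstruction setup; the shapes, masking ratio, and architecture below are illustrative, not the repo's exact configuration:

```python
import torch
import torch.nn as nn

batch, seq_len, n_vars, d_model = 16, 100, 8, 64

proj_in = nn.Linear(n_vars, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=3,
)
proj_out = nn.Linear(d_model, n_vars)
opt = torch.optim.Adam(
    list(proj_in.parameters()) + list(encoder.parameters()) + list(proj_out.parameters()),
    lr=1e-3,
)

x = torch.randn(batch, seq_len, n_vars)            # unlabeled multivariate series
mask = torch.rand(batch, seq_len, n_vars) < 0.15   # hide ~15% of the values
x_in = x.masked_fill(mask, 0.0)

recon = proj_out(encoder(proj_in(x_in)))
loss = ((recon - x)[mask] ** 2).mean()             # MSE on masked positions only
opt.zero_grad()
loss.backward()
opt.step()
```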

How far can we go without convolution: Improving fully-connected networks

hantek/zlinnet 9 Nov 2015

We propose ways to improve the performance of fully connected networks.

wav2vec: Unsupervised Pre-training for Speech Recognition

pytorch/fairseq 11 Apr 2019

Our experiments on WSJ reduce WER of a strong character-based log-mel filterbank baseline by up to 36% when only a few hours of transcribed data is available.

Multilingual Constituency Parsing with Self-Attention and Pre-Training

nikitakit/self-attentive-parser ACL 2019

We show that constituency parsing benefits from unsupervised pre-training across a variety of languages and a range of pre-training conditions.

Spatiotemporal Contrastive Video Representation Learning

tensorflow/models CVPR 2021

Our representations are learned using a contrastive loss, where two augmented clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed away.
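
A minimal sketch of this kind of clip-level contrastive (InfoNCE-style) objective, assuming each video contributes one positive pair per batch and the other videos in the batch serve as negatives; the paper's full setup (augmentations, projection head, video backbone) is omitted:

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(z1, z2, temperature=0.1):
    """z1[i] and z2[i] are embeddings of two augmented clips from the same
    video and are pulled together; clips from different videos act as
    negatives. z1, z2: (batch, dim) clip embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0))      # positives sit on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

z1, z2 = torch.randn(32, 128), torch.randn(32, 128)  # stand-ins for clip embeddings
loss = clip_contrastive_loss(z1, z2)
```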

Exact solutions to the nonlinear dynamics of learning in deep linear neural networks

ducha-aiki/LSUVinit 20 Dec 2013

We further exhibit a new class of random orthogonal initial conditions on weights that, like unsupervised pre-training, enjoys depth independent learning times.
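
The "random orthogonal initial conditions" discussed here correspond to what deep learning frameworks now ship as orthogonal weight initialization. An illustrative PyTorch snippet on a deep linear stack matching the paper's setting; the layer count and widths are arbitrary:

```python
import torch.nn as nn

# A deep linear network initialized with random orthogonal weight matrices.
net = nn.Sequential(*[nn.Linear(256, 256) for _ in range(20)])
for m in net.modules():
    if isinstance(m, nn.Linear):
        nn.init.orthogonal_(m.weight)  # random orthogonal initial conditions
        nn.init.zeros_(m.bias)
```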

SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning

YihengZhang-CV/SeCo-Sequence-Contrastive-Learning 3 Aug 2020

In this paper, we explore the basic and generic supervision available within a video sequence from three perspectives: spatial, spatiotemporal, and sequential.

Self-training and Pre-training are Complementary for Speech Recognition

pytorch/fairseq 22 Oct 2020

Self-training and unsupervised pre-training have emerged as effective approaches to improve speech recognition systems using unlabeled data.
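
The self-training half of this recipe is pseudo-labeling: a model that has already been pre-trained (and fine-tuned on the available transcriptions) labels unlabeled audio, and its confident predictions are added back to the training set. A hedged sketch, where `model` and `unlabeled_batches` are placeholders rather than fairseq APIs:

```python
import torch

def pseudo_label(model, unlabeled_batches, threshold=0.9):
    """Collect (input, pseudo-label) pairs for utterances the model is
    confident about; `model` is assumed to return per-frame logits."""
    pseudo = []
    model.eval()
    with torch.no_grad():
        for x in unlabeled_batches:
            probs = model(x).softmax(dim=-1)      # per-frame label distribution
            conf, labels = probs.max(dim=-1)
            if conf.mean() > threshold:           # keep only confident utterances
                pseudo.append((x, labels))
    return pseudo

# Training then continues on the labeled data plus the pseudo-labeled pairs,
# which the paper shows is complementary to unsupervised pre-training.
```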