Unsupervised Pre-training
103 papers with code • 2 benchmarks • 7 datasets
Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
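The idea above — learn features from unlabeled data via an auxiliary (pretext) task, then reuse them for a supervised task — can be sketched minimally. This is an illustrative toy, not any paper's method: the denoising pretext task, the data shapes, and all variable names are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 unlabeled 8-d vectors, but only 10 labeled examples.
X_unlabeled = rng.normal(size=(200, 8))
X_labeled = rng.normal(size=(10, 8))
y = (X_labeled.sum(axis=1) > 0).astype(float)  # synthetic labels

# --- Pretext task (self-supervised): denoising reconstruction ---
# Learn a linear map W that recovers the clean input from a corrupted one.
W = rng.normal(scale=0.1, size=(8, 8))
lr = 0.01
for _ in range(500):
    X_noisy = X_unlabeled + rng.normal(scale=0.5, size=X_unlabeled.shape)
    pred = X_noisy @ W
    grad = X_noisy.T @ (pred - X_unlabeled) / len(X_unlabeled)
    W -= lr * grad

# --- Fine-tuning: reuse W as a frozen feature extractor ---
feats = X_labeled @ W
w_head = np.zeros(8)  # small supervised head trained on the few labels
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head)))  # logistic regression
    w_head -= 0.1 * feats.T @ (p - y) / len(y)

acc = ((1.0 / (1.0 + np.exp(-(feats @ w_head))) > 0.5) == y).mean()
print(f"train accuracy after fine-tuning: {acc:.2f}")
```

In practice the linear map is replaced by a deep network and the pretext task by objectives like contrastive learning or masked prediction, but the two-stage structure (pre-train on unlabeled data, fine-tune on labels) is the same.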
Libraries
Use these libraries to find Unsupervised Pre-training models and implementations.
Most implemented papers
Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote Sensing Data
Transfer learning approaches can reduce the data requirements of deep learning algorithms.
An Analysis of Unsupervised Pre-training in Light of Recent Advances
We discover that unsupervised pre-training, as expected, helps when the ratio of unsupervised to supervised samples is high and, surprisingly, hurts when the ratio is low.
Data-dependent Initializations of Convolutional Neural Networks
Convolutional Neural Networks spread through computer vision like wildfire, impacting almost all visual tasks imaginable.
BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning
Multi-task learning shares information between related tasks, sometimes reducing the number of parameters required.
Unsupervised Pre-Training of Image Features on Non-Curated Data
Our goal is to bridge the performance gap between unsupervised methods trained on curated data, which are costly to obtain, and massive raw datasets that are easily available.
Rolling-Unrolling LSTMs for Action Anticipation from First-Person Video
The experiments show that the proposed architecture is state-of-the-art in the domain of egocentric videos, achieving top performances in the 2019 EPIC-Kitchens egocentric action anticipation challenge.
PointContrast: Unsupervised Pre-training for 3D Point Cloud Understanding
To this end, we select a suite of diverse datasets and tasks to measure the effect of unsupervised pre-training on a large source set of 3D scenes.
UP-DETR: Unsupervised Pre-training for Object Detection with Transformers
DEtection TRansformer (DETR) achieves object detection performance competitive with Faster R-CNN via a transformer encoder-decoder architecture.
OBoW: Online Bag-of-Visual-Words Generation for Self-Supervised Learning
With this in mind, we propose a teacher-student scheme to learn representations by training a convolutional net to reconstruct a bag-of-visual-words (BoW) representation of an image, given as input a perturbed version of that same image.
End-to-End Training of Neural Retrievers for Open-Domain Question Answering
We also explore two approaches for end-to-end supervised training of the reader and retriever components in OpenQA models.