Search Results

Contrastive Multiview Coding

HobbitLong/PyContrast ECCV 2020

We analyze key properties of the approach that make it work, finding that the contrastive loss outperforms a popular alternative based on cross-view prediction, and that the more views we learn from, the better the resulting representation captures underlying scene semantics. (A minimal sketch of a multiview contrastive loss follows below.)

Contrastive Learning · Self-Supervised Action Recognition (+1 more)
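The CMC entry above mentions a contrastive loss over multiple views. As a rough illustration only, not the CMC or PyContrast implementation, here is a minimal two-view InfoNCE-style contrastive loss in PyTorch; the function name info_nce, the (batch, dim) embedding shapes, and the temperature value are assumptions made for this sketch.

```python
# Hypothetical sketch of a two-view contrastive (InfoNCE-style) loss.
# Not taken from CMC/PyContrast; names, shapes, and the temperature are illustrative.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Contrast each embedding in view 1 against all embeddings in view 2.

    z1, z2: (batch, dim) features of the same images under two different views.
    The matching row index is the positive pair; every other row is a negative.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                     # (batch, batch) similarities
    targets = torch.arange(z1.size(0), device=z1.device)   # positives on the diagonal
    return F.cross_entropy(logits, targets)
```

With more than two views, one natural extension is to sum this loss over all pairs of views, which is one way to read the claim that learning from more views improves the representation.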

Momentum Contrast for Unsupervised Visual Representation Learning

HobbitLong/PyContrast CVPR 2020

This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. (A minimal sketch of the queue-plus-momentum-encoder idea follows below.)

Contrastive Learning · Representation Learning (+1 more)
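The MoCo entry above refers to building a large and consistent dictionary on the fly. Below is a rough PyTorch sketch of that idea, a FIFO queue of keys produced by a momentum-updated key encoder, written for illustration rather than copied from MoCo or PyContrast; the class name MoCoSketch, the hyperparameter defaults, and the assumption that the encoder maps images to dim-dimensional features are all hypothetical.

```python
# Hypothetical sketch of a dictionary-as-queue contrastive setup with a
# momentum-updated key encoder. Illustrative only, not the MoCo/PyContrast code.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoCoSketch(nn.Module):
    def __init__(self, encoder: nn.Module, dim: int = 128, queue_size: int = 4096,
                 momentum: float = 0.999, temperature: float = 0.07):
        super().__init__()
        self.encoder_q = encoder                      # query encoder, trained by backprop
        self.encoder_k = copy.deepcopy(encoder)       # key encoder, momentum update only
        for p in self.encoder_k.parameters():
            p.requires_grad = False
        self.m, self.t = momentum, temperature
        # The queue of past keys acts as the large dictionary of negatives.
        self.register_buffer("queue", F.normalize(torch.randn(queue_size, dim), dim=1))
        self.register_buffer("ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _momentum_update(self):
        for pq, pk in zip(self.encoder_q.parameters(), self.encoder_k.parameters()):
            pk.data = pk.data * self.m + pq.data * (1.0 - self.m)

    @torch.no_grad()
    def _enqueue(self, keys: torch.Tensor):
        n, ptr = keys.size(0), int(self.ptr)
        self.queue[ptr:ptr + n] = keys                # assumes queue_size % batch == 0
        self.ptr[0] = (ptr + n) % self.queue.size(0)

    def forward(self, im_q: torch.Tensor, im_k: torch.Tensor) -> torch.Tensor:
        q = F.normalize(self.encoder_q(im_q), dim=1)  # assumes encoder outputs (batch, dim)
        with torch.no_grad():
            self._momentum_update()
            k = F.normalize(self.encoder_k(im_k), dim=1)
        l_pos = (q * k).sum(dim=1, keepdim=True)               # (batch, 1) positive logits
        l_neg = q @ self.queue.clone().detach().t()            # (batch, queue) negatives
        logits = torch.cat([l_pos, l_neg], dim=1) / self.t
        labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
        loss = F.cross_entropy(logits, labels)
        self._enqueue(k)                                       # refresh the dictionary
        return loss
```

Because the queue holds keys from earlier batches while the key encoder drifts only slowly (momentum close to 1), the dictionary can be much larger than a single batch yet stay roughly consistent, which is the property the abstract snippet highlights.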

Unsupervised Feature Learning via Non-Parametric Instance Discrimination

HobbitLong/PyContrast CVPR 2018

Neural net classifiers trained on data with annotated class labels can also capture apparent visual similarity among categories without being directed to do so.

General Classification · object-detection (+4 more)

Self-Supervised Learning of Pretext-Invariant Representations

HobbitLong/PyContrast CVPR 2020

The goal of self-supervised learning from images is to construct image representations that are semantically meaningful via pretext tasks that do not require semantic annotations for a large training set of images.

Contrastive Learning · object-detection (+5 more)

What Makes for Good Views for Contrastive Learning?

HobbitLong/PyContrast NeurIPS 2020

Contrastive learning between multiple views of the data has recently achieved state of the art performance in the field of self-supervised representation learning.

Contrastive Learning · Data Augmentation (+8 more)

Improved Baselines with Momentum Contrastive Learning

HobbitLong/PyContrast 9 Mar 2020

Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR.

Contrastive Learning · Data Augmentation (+3 more)
