
# Representation Learning

645 papers with code · Methodology

Representation learning is concerned with training machine learning algorithms to learn useful representations, e.g. representations that are interpretable, expose latent features, or can be reused for transfer learning.
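As a minimal concrete instance of the idea, a learned linear representation such as PCA maps raw inputs to latent features that a downstream model can reuse. The data below is synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 100 samples in 10 dimensions with 2-d latent structure.
latent = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 10))
x = latent @ mixing + 0.05 * rng.normal(size=(100, 10))

# Learn a 2-d representation via PCA (SVD of the centered data matrix).
x_centered = x - x.mean(axis=0)
_, _, vt = np.linalg.svd(x_centered, full_matrices=False)
z = x_centered @ vt[:2].T  # latent features, reusable by downstream models

print(z.shape)  # (100, 2)
```

Because the data is nearly rank-2, the 2-d representation `z` retains almost all of the variance in `x`, which is exactly the property downstream tasks exploit.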

(Image credit: Visualizing and Understanding Convolutional Networks)


# grid2vec: Learning Efficient Visual Representations via Flexible Grid-Graphs

30 Jul 2020

We compare the performance of $grid2vec$ with a set of state-of-the-art representation learning and visual recognition models.

# Learning Video Representations from Textual Web Supervision

29 Jul 2020

Based on this observation, we propose to use such text as a method for learning video representations.

# Unsupervised Generative Adversarial Alignment Representation for Sheet music, Audio and Lyrics

29 Jul 2020

In this paper, we propose an unsupervised generative adversarial alignment representation (UGAAR) model to learn deep discriminative representations shared across three major musical modalities: sheet music, lyrics, and audio. A deep neural network architecture with three branches, one per modality, is trained jointly.

# Learning Representations for Axis-Aligned Decision Forests through Input Perturbation

29 Jul 2020

Axis-aligned decision forests have long been the leading class of machine learning algorithms for modeling tabular data.

# Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases

28 Jul 2020

Second, we demonstrate that these approaches obtain further gains from access to a clean object-centric training dataset like Imagenet.

# Self-Supervised Contrastive Learning for Unsupervised Phoneme Segmentation

27 Jul 2020

Results suggest that our approach surpasses the baseline models and reaches state-of-the-art performance on both data sets.

# Label-Consistency based Graph Neural Networks for Semi-supervised Node Classification

27 Jul 2020

Graph neural networks (GNNs) achieve remarkable success in graph-based semi-supervised node classification, leveraging information from neighboring nodes to improve the representation learning of the target node.
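The neighbor-aggregation step that such GNNs rely on can be sketched as a single mean-aggregation layer in NumPy. The toy graph, features, and weights below are illustrative, not from the paper:

```python
import numpy as np

# Toy undirected graph on 4 nodes with edges (0-1, 0-2, 2-3).
adj = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Add self-loops so each node also keeps its own features.
adj_hat = adj + np.eye(4)

# Row-normalize: each node averages over itself and its neighbors.
norm_adj = adj_hat / adj_hat.sum(axis=1, keepdims=True)

# Initial 2-d node features and a (randomly initialized) weight matrix.
x = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
w = np.random.default_rng(0).normal(size=(2, 2))

# One GNN layer: aggregate neighbor features, transform, apply ReLU.
h = np.maximum(norm_adj @ x @ w, 0.0)
print(h.shape)  # (4, 2)
```

Stacking such layers lets each node's representation incorporate information from progressively larger neighborhoods, which is the mechanism the abstract refers to.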

# Representation Learning with Video Deep InfoMax

27 Jul 2020

DeepInfoMax (DIM) is a self-supervised method that leverages the internal structure of deep networks to construct such views, forming prediction tasks between local features, which depend on small patches of an image, and global features, which depend on the whole image.
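The local-global prediction task described above can be sketched as an InfoNCE-style contrastive loss between patch (local) features and a pooled (global) feature. This is a simplified illustration with random features, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Batch of 4 images, each with 9 local patch features of dimension 8.
local = rng.normal(size=(4, 9, 8))

# Global feature per image: here simply the mean over its local patches.
global_feat = local.mean(axis=1)  # (4, 8)

# Score every (global, patch) pair: for image i and patch position p,
# compare image i's global feature against all images' patch p.
scores = np.einsum('id,jpd->ijp', global_feat, local)  # (4, 4, 9)

# InfoNCE-style loss: the matching image (j == i) is the positive;
# the other images in the batch provide negatives at each patch position.
logits = scores.transpose(0, 2, 1).reshape(-1, 4)  # (4*9, 4)
labels = np.repeat(np.arange(4), 9)
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -log_probs[np.arange(len(labels)), labels].mean()
print(float(loss))
```

Minimizing this loss pushes each image's global feature to be more predictive of its own patches than of patches from other images, a lower bound on local-global mutual information.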

# Contrastive Visual-Linguistic Pretraining

26 Jul 2020

We evaluate CVLP on several downstream tasks, including VQA, GQA and NLVR2, to validate the superiority of contrastive learning for multi-modality representation learning.

# Robust and Generalizable Visual Representation Learning via Random Convolutions

25 Jul 2020

While successful at various computer vision tasks, deep neural networks have been shown to be vulnerable to texture style shifts and small perturbations to which humans are robust.