
Representation Learning

645 papers with code · Methodology

Representation learning is concerned with training machine learning models to learn useful representations of data, e.g., representations that are interpretable, capture latent structure, or can be reused for transfer learning.
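As a minimal, hedged illustration of the idea (not taken from any paper on this page): even linear PCA is a form of representation learning, mapping high-dimensional data to a compact latent code that can feed downstream tasks. The function name and setup below are illustrative assumptions.

```python
import numpy as np

def learn_representation(X, k):
    """Return a k-dimensional representation of the rows of X via PCA."""
    Xc = X - X.mean(axis=0)                        # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T                                   # top-k principal directions
    return Xc @ W, W                               # latent codes and linear encoder

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 2))                      # 2-D latent factors...
A = rng.normal(size=(2, 5))
X = Z @ A + 0.01 * rng.normal(size=(200, 5))       # ...embedded noisily in 5-D
codes, W = learn_representation(X, k=2)
print(codes.shape)  # (200, 2)
```

The learned 2-D codes recover (up to rotation) the latent factors that generated the 5-D observations, which is the spirit of the interpretability and transfer goals described above.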

(Image credit: Visualizing and Understanding Convolutional Networks)


Latest papers with code

dMelodies: A Music Dataset for Disentanglement Learning

29 Jul 2020 · ashispati/dmelodies_dataset

In this paper, we present a new symbolic music dataset that will help researchers working on disentanglement problems demonstrate the efficacy of their algorithms across diverse domains.

REPRESENTATION LEARNING


DeepSVG: A Hierarchical Generative Network for Vector Graphics Animation

22 Jul 2020 · alexandre01/deepsvg

Scalable Vector Graphics (SVG) are ubiquitous in modern 2D interfaces due to their ability to scale to different resolutions.

REPRESENTATION LEARNING · VECTOR GRAPHICS ANIMATION


Unsupervised Deep Representation Learning for Real-Time Tracking

22 Jul 2020 · 594422814/UDT

Advances in visual tracking have continuously been driven by deep learning models.

REPRESENTATION LEARNING · VISUAL TRACKING


Edge-aware Graph Representation Learning and Reasoning for Face Parsing

22 Jul 2020 · tegusi/EAGRNet

Specifically, we encode a facial image onto a global graph representation where a collection of pixels ("regions") with similar features are projected to each vertex.
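A hedged sketch of the projection step described above (the grouping and function names are illustrative assumptions, not the authors' code): pixels with similar features are grouped into regions, and each region's pooled feature becomes one graph vertex.

```python
import numpy as np

def pixels_to_graph(features, region_ids):
    """features: (H*W, d) per-pixel features; region_ids: (H*W,) region label per pixel.
    Returns (num_regions, d) vertex features: the mean feature of each region."""
    vertices = []
    for r in np.unique(region_ids):
        vertices.append(features[region_ids == r].mean(axis=0))
    return np.stack(vertices)

# Four pixels forming two visually similar groups
feats = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
regions = np.array([0, 0, 1, 1])
V = pixels_to_graph(feats, regions)
print(V)  # [[0.05 0.  ] [0.95 1.05]]
```

Reasoning over a handful of region vertices instead of every pixel is what makes graph-based parsing of face components tractable.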

GRAPH REPRESENTATION LEARNING


Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding

21 Jul 2020 · bethgelab/slow_disentanglement

We construct an unsupervised learning model that achieves nonlinear disentanglement of underlying factors of variation in naturalistic videos.

UNSUPERVISED REPRESENTATION LEARNING


Second-Order Pooling for Graph Neural Networks

20 Jul 2020 · divelab/sopool

In addition, compared to existing graph pooling methods, second-order pooling is able to use information from all nodes and collect second-order statistics, making it more powerful.
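The core idea above can be sketched in a few lines (a hedged illustration with an assumed function name, not the authors' API): second-order pooling forms a graph-level representation from the outer-product statistics Hᵀ H of the node-feature matrix H, so every node contributes and the result is invariant to node ordering.

```python
import numpy as np

def second_order_pool(H):
    """H: (n_nodes, d) node features. Returns a d*d graph-level vector
    of second-order feature statistics, independent of node order."""
    return (H.T @ H).reshape(-1)

H = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
g = second_order_pool(H)
print(g)  # [2. 1. 1. 5.]
```

Because Hᵀ H sums a rank-one contribution from every node, permuting the rows of H leaves the pooled vector unchanged, which is the permutation invariance a graph readout requires.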

GRAPH CLASSIFICATION · GRAPH REPRESENTATION LEARNING · LINK PREDICTION · NODE CLASSIFICATION


Towards Deeper Graph Neural Networks

18 Jul 2020 · mengliu1998/DeeperGNN

Based on our theoretical and empirical analysis, we propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
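A rough, hedged sketch of the mechanism named above (details such as the gate are assumptions; the real model learns them): propagation is decoupled from transformation, features are smoothed over several hops, and the multi-hop representations are adaptively combined so the receptive field can grow deep without oversmoothing.

```python
import numpy as np

def dagnn_propagate(A, X, hops, gate):
    """A: (n, n) adjacency; X: (n, d) features; gate: (hops+1,) logits
    weighting each hop depth (learned in the real model, fixed here)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))       # symmetric normalization
    reps = [X]
    for _ in range(hops):
        reps.append(A_norm @ reps[-1])             # k-hop smoothed features
    w = np.exp(gate) / np.exp(gate).sum()          # softmax over hop depths
    return sum(w_k * R for w_k, R in zip(w, reps))

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.eye(3)
out = dagnn_propagate(A, X, hops=2, gate=np.zeros(3))
print(out.shape)  # (3, 3)
```

Setting the gate logits to zero averages all hop depths equally; learning them lets each node decide how far its effective receptive field should reach.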

GRAPH REPRESENTATION LEARNING · NODE CLASSIFICATION


Generative Pretraining from Pixels

Preprint 2020 · openai/image-gpt

Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images.

SELF-SUPERVISED IMAGE CLASSIFICATION · UNSUPERVISED REPRESENTATION LEARNING

17 Jul 2020

Learning Semantics-enriched Representation via Self-discovery, Self-classification, and Self-restoration

14 Jul 2020 · JLiangLab/SemanticGenesis

To this end, we train deep models to learn semantically enriched visual representation by self-discovery, self-classification, and self-restoration of the anatomy underneath medical images, resulting in a semantics-enriched, general-purpose, pre-trained 3D model, named Semantic Genesis.

REPRESENTATION LEARNING · SELF-SUPERVISED LEARNING · TRANSFER LEARNING


PSConv: Squeezing Feature Pyramid into One Compact Poly-Scale Convolutional Layer

13 Jul 2020 · d-li14/PSConv

Despite their strong modeling capacities, Convolutional Neural Networks (CNNs) are often scale-sensitive.

REPRESENTATION LEARNING
