Representation learning is concerned with training machine learning models to learn useful representations of data, e.g. representations that are interpretable, capture latent factors of variation, or can be reused for transfer learning.
(Image credit: Visualizing and Understanding Convolutional Networks)
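To make the idea of a reusable representation concrete, here is a minimal NumPy sketch of a linear autoencoder that compresses inputs into a low-dimensional code; all dimensions, names, and the training loop are illustrative assumptions, not drawn from any paper listed below.

```python
import numpy as np

# Minimal sketch of representation learning: a linear autoencoder
# compresses inputs into a low-dimensional code (the "representation")
# and is trained to reconstruct the input from that code.
# All dimensions and names here are illustrative.

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 32))                 # 256 samples, 32 features
W_enc = rng.normal(scale=0.1, size=(32, 8))    # encoder: 32 -> 8
W_dec = rng.normal(scale=0.1, size=(8, 32))    # decoder: 8 -> 32
lr = 1e-2

for _ in range(500):
    Z = X @ W_enc                       # latent representation
    X_hat = Z @ W_dec                   # reconstruction
    err = X_hat - X                     # reconstruction error
    # Gradients of the mean squared reconstruction error
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

Z = X @ W_enc  # learned 8-dim representations, reusable downstream
```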
In this paper, we present a new symbolic music dataset that will help researchers working on disentanglement problems demonstrate the efficacy of their algorithms across diverse domains.
Scalable Vector Graphics (SVG) are ubiquitous in modern 2D interfaces due to their ability to scale to different resolutions.
Ranked #1 on Vector Graphics Animation on SVG-Icons8
Advances in visual tracking have continuously been driven by deep learning models.
Specifically, we encode a facial image into a global graph representation in which each vertex aggregates a collection of pixels ("regions") with similar features.
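The pixel-to-vertex projection described above can be sketched as a soft assignment of pixel features to a small set of vertices, so that pixels with similar features contribute to the same vertex. The softmax assignment, shapes, and names below are assumptions about such a scheme, not the paper's exact formulation.

```python
import numpy as np

def project_to_graph(feats, W_assign):
    """feats: (H*W, C) pixel features; W_assign: (C, K) assignment weights."""
    logits = feats @ W_assign                  # (H*W, K) pixel-to-vertex scores
    A = np.exp(logits - logits.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)          # soft assignment per pixel
    # Each vertex is the assignment-weighted average of its pixels' features
    vertices = (A.T @ feats) / (A.sum(axis=0)[:, None] + 1e-8)  # (K, C)
    return vertices

rng = np.random.default_rng(0)
pixels = rng.normal(size=(64 * 64, 32))        # a flattened feature map
W = rng.normal(size=(32, 8))
V = project_to_graph(pixels, W)                # 8 vertices, 32-dim each
```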
We construct an unsupervised learning model that achieves nonlinear disentanglement of underlying factors of variation in naturalistic videos.
In addition, unlike existing graph pooling methods, second-order pooling uses information from all nodes and collects second-order statistics, making it more powerful.
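A minimal sketch of this idea, assuming the common formulation in which a graph with node-feature matrix H is pooled to H^T H; the flattening step and names are illustrative.

```python
import numpy as np

# Second-order graph pooling: given node features H (n x d), the pooled
# graph representation is H^T H, which aggregates information from ALL
# nodes and captures feature covariances (second-order statistics).

def second_order_pool(H):
    """H: (num_nodes, d) node features -> (d*d,) graph representation."""
    S = H.T @ H            # (d, d) second-order statistics over all nodes
    return S.reshape(-1)   # flatten for a downstream classifier

rng = np.random.default_rng(0)
H = rng.normal(size=(17, 16))   # a graph with 17 nodes, 16-dim features
g = second_order_pool(H)        # fixed-size output regardless of node count
print(g.shape)                  # (256,)
```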
Based on our theoretical and empirical analysis, we propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
Ranked #1 on Node Classification on Coauthor Physics
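A hedged sketch of the adaptive multi-hop propagation the DAGNN abstract describes: transform node features once, propagate them over k hops, and combine the hop-wise representations with learned per-node scores. The sigmoid gating and shapes below follow the abstract's high-level description and are assumptions, not the reference implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dagnn_propagate(A_hat, Z, s, k=10):
    """A_hat: (n, n) normalized adjacency; Z: (n, c) transformed features;
    s: (c,) scoring vector for adaptive hop weighting."""
    hops = [Z]
    H = Z
    for _ in range(k):
        H = A_hat @ H
        hops.append(H)                      # representations at hops 0..k
    stack = np.stack(hops, axis=1)          # (n, k+1, c)
    scores = sigmoid(stack @ s)             # (n, k+1) per-node hop scores
    return (scores[..., None] * stack).sum(axis=1)  # adaptive combination

rng = np.random.default_rng(0)
n, c = 30, 16
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.maximum(A, A.T) + np.eye(n)          # symmetric, with self-loops
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))         # D^-1/2 A D^-1/2 normalization
Z = rng.normal(size=(n, c))
out = dagnn_propagate(A_hat, Z, rng.normal(size=c))  # (n, c) node outputs
```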
Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images.
Ranked #15 on Self-Supervised Image Classification on ImageNet
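To make the language-modeling analogy concrete, here is a toy sketch of next-pixel prediction on a flattened image; a count-based bigram model stands in for the transformer purely for illustration, and the vocabulary size and shapes are assumptions.

```python
import numpy as np

# Treat an image as a 1-D sequence of discrete pixel values and model
# each pixel from its predecessor (next-token prediction).

rng = np.random.default_rng(0)
V = 16                                    # pixel vocabulary (quantized values)
img = rng.integers(0, V, size=(8, 8))     # one toy 8x8 "image"
seq = img.reshape(-1)                     # raster-scan flattening

# Count-based bigram "model": P(next pixel | current pixel)
counts = np.ones((V, V))                  # Laplace smoothing
for prev, nxt in zip(seq[:-1], seq[1:]):
    counts[prev, nxt] += 1
P = counts / counts.sum(axis=1, keepdims=True)

# Autoregressive negative log-likelihood of the sequence under the model
nll = -np.mean([np.log(P[p, n]) for p, n in zip(seq[:-1], seq[1:])])
print(f"next-pixel NLL: {nll:.3f}")
```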
To this end, we train deep models to learn semantically enriched visual representations through self-discovery, self-classification, and self-restoration of the anatomy underlying medical images. The result is a semantics-enriched, general-purpose, pre-trained 3D model named Semantic Genesis.
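One of the pretext tasks above, self-restoration, can be sketched as corrupting an image patch and training a model to recover the original. The cube-masking corruption and all names below are illustrative assumptions, not Semantic Genesis's actual implementation.

```python
import numpy as np

def corrupt(volume, rng, n_cubes=4, size=4):
    """Mask out random cubes of a 3D volume to create a restoration target."""
    v = volume.copy()
    for _ in range(n_cubes):
        x, y, z = rng.integers(0, volume.shape[0] - size, size=3)
        v[x:x+size, y:y+size, z:z+size] = 0.0
    return v

rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32))            # a toy 3D medical-image patch
corrupted = corrupt(vol, rng)
# A real model would map `corrupted` back toward `vol`; training minimizes
# the reconstruction error, shown here for the trivial identity baseline:
mse = np.mean((corrupted - vol) ** 2)
print(f"restoration MSE of doing nothing: {mse:.4f}")
```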
Despite their strong modeling capacities, Convolutional Neural Networks (CNNs) are often scale-sensitive.