Representation Learning

3699 papers with code • 5 benchmarks • 9 datasets

Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, reveal hidden features, or be used for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.

Deep neural networks can be viewed as representation learning models: they encode the input into intermediate representations, typically projected into a different (often lower-dimensional) subspace. These representations are then usually passed to a simple model, such as a linear classifier, to solve the downstream task, as in the sketch below.
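
As a minimal illustration of this "linear probe" pattern, the sketch below freezes a pretrained encoder and trains only a linear classifier on its representations. It assumes PyTorch and a recent torchvision with pretrained weights; the batch of images/labels and the number of classes are placeholders.

```python
# Linear-probe sketch: freeze a pretrained encoder, train only a linear
# classifier on top of its representations (PyTorch / torchvision assumed).
import torch
import torch.nn as nn
from torchvision import models

encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = nn.Identity()          # drop the original head; keep 512-d features
for p in encoder.parameters():
    p.requires_grad = False         # freeze the learned representation
encoder.eval()

probe = nn.Linear(512, 10)          # linear classifier on the frozen features
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    with torch.no_grad():
        feats = encoder(images)     # representations, shape (B, 512)
    loss = criterion(probe(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```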

Representation learning can be divided into:

  • Supervised representation learning: learning representations on task A using annotated data, which are then transferred to solve task B
  • Unsupervised representation learning: learning representations from unlabeled data, which are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks (see the sketch after this list).
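
As an illustration of reusing unsupervised representations, the sketch below extracts frozen BERT embeddings as fixed-size features for a downstream task. It assumes the Hugging Face transformers library; the downstream classifier itself is left as a placeholder.

```python
# Sketch of reusing unsupervised representations: frozen BERT embeddings
# as features for a downstream model (Hugging Face transformers assumed).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
bert.eval()

sentences = ["representation learning is useful", "so is transfer learning"]
batch = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = bert(**batch)
# [CLS] token embedding as a fixed-size sentence representation
features = outputs.last_hidden_state[:, 0]   # shape (2, 768)
# `features` can now feed any lightweight downstream model,
# e.g. torch.nn.Linear(768, num_classes)
```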

More recently, self-supervised learning (SSL) has become one of the main drivers of unsupervised representation learning in fields like computer vision and NLP.
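
To make the SSL idea concrete, below is a compact NT-Xent (InfoNCE) contrastive loss in the SimCLR style: two augmented views of the same input should map to nearby representations. This is an illustrative sketch, not any specific paper's implementation; `encoder`, `aug1`, and `aug2` are assumed to be defined elsewhere.

```python
# NT-Xent (InfoNCE) loss used in SimCLR-style contrastive self-supervision.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (B, D) representations of two augmented views of a batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, D), unit norm
    sim = z @ z.t() / temperature                        # cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    B = z1.size(0)
    # positives: row i (view 1) pairs with row i + B (view 2), and vice versa
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)])
    return F.cross_entropy(sim, targets)

# usage (hypothetical): loss = nt_xent(encoder(aug1(x)), encoder(aug2(x)))
```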

(Image credit: Visualizing and Understanding Convolutional Networks)

Latest papers with no code

Generalizing Multi-Step Inverse Models for Representation Learning to Finite-Memory POMDPs

no code yet • 22 Apr 2024

In this work, we consider the problem of discovering the agent-centric state in the more challenging high-dimensional non-Markovian setting, when the state can be decoded from a sequence of past observations.

SPGNN: Recognizing Salient Subgraph Patterns via Enhanced Graph Convolution and Pooling

no code yet • 21 Apr 2024

We propose a novel Subgraph Pattern GNN (SPGNN) architecture that incorporates these enhancements.

Enforcing Conditional Independence for Fair Representation Learning and Causal Image Generation

no code yet • 21 Apr 2024

We are able to enforce conditional independence of the diffusion autoencoder latent representation with respect to any protected attribute under the equalized odds constraint and show that this approach enables causal image generation with controllable latent spaces.

Fermi-Bose Machine

no code yet • 21 Apr 2024

Distinct from human cognitive processing, deep neural networks trained by backpropagation can be easily fooled by adversarial examples.

Joint Quality Assessment and Example-Guided Image Processing by Disentangling Picture Appearance from Content

no code yet • 20 Apr 2024

The deep learning revolution has strongly impacted low-level image processing tasks such as style/domain transfer, enhancement/restoration, and visual quality assessments.

Wills Aligner: A Robust Multi-Subject Brain Representation Learner

no code yet • 20 Apr 2024

We meticulously evaluate the performance of our approach across coarse-grained and fine-grained visual decoding tasks.

GraphMatcher: A Graph Representation Learning Approach for Ontology Matching

no code yet • 20 Apr 2024

Ontology matching is defined as finding a relationship or correspondence between two or more entities in two or more ontologies.

Purposer: Putting Human Motion Generation in Context

no code yet • 19 Apr 2024

We present a novel method to generate human motion to populate 3D indoor scenes.

Foundation Model assisted Weakly Supervised LiDAR Semantic Segmentation

no code yet • 19 Apr 2024

Furthermore, to mitigate the influence of erroneous pseudo labels obtained from sparse annotations on point cloud features, we propose a multi-modal weakly supervised network for LiDAR semantic segmentation, called MM-ScatterNet.

OPTiML: Dense Semantic Invariance Using Optimal Transport for Self-Supervised Medical Image Representation

no code yet • 18 Apr 2024

In response to these constraints, we introduce a novel SSL framework OPTiML, employing optimal transport (OT), to capture the dense semantic invariance and fine-grained details, thereby enhancing the overall effectiveness of SSL in medical image representation learning.