Representation Learning

3704 papers with code • 5 benchmarks • 9 datasets

Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, reveal hidden features, or be used for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.

Deep neural networks can be considered representation learning models: they encode the input and project it into a different subspace. These representations are then typically passed to a shallow model, such as a linear classifier, that is trained for the downstream task.
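
The following is a minimal sketch of that pattern, often called a linear probe: a frozen pretrained backbone supplies the representation and only a linear classifier is trained on top. It assumes PyTorch and torchvision; the ResNet-18 backbone, the 512-dimensional feature size, and the 10-class setup are illustrative choices, not something prescribed by the page.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone acts as a frozen feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()           # expose the 512-d representation
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

probe = nn.Linear(512, 10)            # linear classifier, 10 classes (illustrative)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)  # stand-in batch of images
labels = torch.randint(0, 10, (8,))   # stand-in labels

with torch.no_grad():                 # the representation is not updated
    feats = backbone(images)
loss = criterion(probe(feats), labels)
loss.backward()
optimizer.step()
```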

Representation learning can be divided into:

  • Supervised representation learning: representations are learned on task A using annotated data and then reused to solve task B
  • Unsupervised representation learning: representations are learned on a task using unlabeled data. They are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks (a sketch of this reuse pattern follows the list).
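
Since the bullet above cites BERT, here is a hedged sketch of that reuse pattern: representations learned without labels are extracted from a pretrained model and handed to a downstream task. It assumes the Hugging Face transformers library; the bert-base-uncased checkpoint and the use of the [CLS] token as a sentence representation are common but illustrative choices.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Pretrained without labels (masked language modeling) ...
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

sentences = ["representation learning reduces labeling cost",
             "these features transfer to new tasks"]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

# ... then reused as a frozen feature extractor for a downstream task.
with torch.no_grad():
    outputs = encoder(**batch)
embeddings = outputs.last_hidden_state[:, 0]  # [CLS] vectors, one per sentence
print(embeddings.shape)                        # torch.Size([2, 768])
```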

More recently, self-supervised learning (SSL) has become one of the main drivers of unsupervised representation learning in fields like computer vision and NLP.
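
As a concrete instance of SSL, here is a minimal sketch of the SimCLR-style NT-Xent contrastive objective, one common way such representations are learned without labels. It assumes PyTorch; the batch size, embedding dimension, and temperature are illustrative, and z1/z2 stand in for projected embeddings of two augmentations of the same images.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss over two augmented views of the same batch."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # 2N x d, unit length
    sim = z @ z.t() / temperature                 # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))             # a sample is not its own pair
    # The positive for row i is its other view: i+N (first half), i-N (second).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Stand-in projections of two augmented views (batch of 4, 128-d).
z1, z2 = torch.randn(4, 128), torch.randn(4, 128)
loss = nt_xent(z1, z2)
```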

(Image credit: Visualizing and Understanding Convolutional Networks)

Masked Image Modeling as a Framework for Self-Supervised Learning across Eye Movements

faceonlive/ai-research 12 Apr 2024

To make sense of their surroundings, intelligent systems must transform complex sensory inputs to structured codes that are reduced to task-relevant information such as object category.

SpectralMamba: Efficient Mamba for Hyperspectral Image Classification

danfenghong/spectralmamba 12 Apr 2024

Recurrent neural networks and Transformers have recently dominated most applications in hyperspectral (HS) imaging, owing to their capability to capture long-range dependencies from spectrum sequences.

TSLANet: Rethinking Transformers for Time Series Representation Learning

emadeldeen24/tslanet 12 Apr 2024

Time series data, characterized by its intrinsic long and short-range dependencies, poses a unique challenge across analytical applications.

Adaptive Fair Representation Learning for Personalized Fairness in Recommendations via Information Alignment

faceonlive/ai-research 11 Apr 2024

Existing works often treat a fairness requirement, represented as a collection of sensitive attributes, as a hyper-parameter, and pursue extreme fairness by completely removing information about the sensitive attributes from the learned fair embedding; this approach suffers from two challenges: the huge training cost incurred by the explosion of attribute combinations, and the suboptimal trade-off between fairness and accuracy.

Representation Learning of Tangled Key-Value Sequence Data for Early Classification

faceonlive/ai-research 11 Apr 2024

To address this problem, we propose a novel method, i.e., Key-Value sequence Early Co-classification (KVEC), which leverages both inner- and inter-correlations of items in a tangled key-value sequence through key correlation and value correlation to learn a better sequence representation.

MindBridge: A Cross-Subject Brain Decoding Framework

littlepure2333/mindbridge 11 Apr 2024

Currently, brain decoding is confined to a per-subject-per-model paradigm, limiting its applicability to the same individual for whom the decoding model is trained.

Advancing Real-time Pandemic Forecasting Using Large Language Models: A COVID-19 Case Study

faceonlive/ai-research 10 Apr 2024

Forecasting the short-term spread of an ongoing disease outbreak is a formidable challenge due to the complexity of contributing factors, some of which can be characterized through interlinked, multi-modality variables such as epidemiological time series data, viral biology, population demographics, and the intersection of public policy and human behavior.

VI-OOD: A Unified Representation Learning Framework for Textual Out-of-distribution Detection

faceonlive/ai-research 9 Apr 2024

Out-of-distribution (OOD) detection plays a crucial role in ensuring the safety and reliability of deep neural networks in various applications.

ActNetFormer: Transformer-ResNet Hybrid Method for Semi-Supervised Action Recognition in Videos

faceonlive/ai-research 9 Apr 2024

Our framework leverages both labeled and unlabeled data to robustly learn action representations in videos, combining pseudo-labeling with contrastive learning for effective learning from both types of samples.

BiSHop: Bi-Directional Cellular Learning for Tabular Data with Generalized Sparse Modern Hopfield Model

magics-lab/bishop 4 Apr 2024

We introduce the Bi-Directional Sparse Hopfield Network (BiSHop), a novel end-to-end framework for deep tabular learning.
