Dimensionality Reduction
729 papers with code • 0 benchmarks • 10 datasets
Dimensionality reduction is the task of reducing the number of features in a dataset while preserving as much of its meaningful structure as possible, e.g. for visualization, compression, or downstream learning.
(Image credit: openTSNE)
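As a minimal illustration of the task (not tied to any specific paper below), a linear dimensionality reduction such as PCA can be sketched in a few lines with NumPy's SVD:

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # n_samples x k embedding

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                   # toy high-dimensional data
Z = pca(X, 2)                                    # 2-D representation
print(Z.shape)
```

Nonlinear methods such as t-SNE (e.g. the openTSNE library credited above) or UMAP follow the same input/output shape convention but optimize a neighborhood-preserving objective instead.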
Benchmarks
These leaderboards are used to track progress in Dimensionality Reduction
Libraries
Use these libraries to find Dimensionality Reduction models and implementations
Datasets
Latest papers with no code
Formation-Controlled Dimensionality Reduction
Dimensionality reduction is the process of generating a low-dimensional representation of high-dimensional data.
Dimensionality Reduction in Sentence Transformer Vector Databases with Fast Fourier Transform
This paper advocates for the broader adoption of FFT in vector database management, marking a significant stride towards addressing the challenges of data volume and complexity in AI research and applications.
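The paper's exact pipeline is not reproduced here, but the core idea, compressing a high-dimensional sentence embedding by keeping only its low-frequency FFT coefficients, can be sketched as follows (the function names and the 384-dimensional embedding size are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def fft_compress(vec, k):
    """Keep only the k lowest-frequency FFT coefficients of a real vector."""
    return np.fft.rfft(vec)[:k]                  # compressed (complex) representation

def fft_decompress(coeffs, n):
    """Approximately reconstruct the original n-dimensional vector."""
    full = np.zeros(n // 2 + 1, dtype=complex)   # zero-pad the dropped frequencies
    full[:len(coeffs)] = coeffs
    return np.fft.irfft(full, n=n)

rng = np.random.default_rng(1)
emb = rng.normal(size=384)                       # stand-in for a sentence embedding
small = fft_compress(emb, 64)
approx = fft_decompress(small, emb.size)
rel_err = np.linalg.norm(emb - approx) / np.linalg.norm(emb)
print(small.shape, rel_err)
```

How faithful the reconstruction is depends on how much of the embedding's energy sits in the retained low frequencies; real embeddings are typically far smoother than the random vector used here.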
Tangling-Untangling Cycle for Efficient Learning
A new insight from this work is the introduction of class labels as context variables in the lifted higher-dimensional space (so that supervised learning becomes unsupervised learning).
Variational Bayesian Optimal Experimental Design with Normalizing Flows
Variational OED (vOED), in contrast, estimates a lower bound of the EIG without likelihood evaluations by approximating the posterior distributions with variational forms, and then tightens the bound by optimizing its variational parameters.
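For context, the variational lower bound referred to here is the standard posterior-approximation (Barber–Agakov style) bound on the expected information gain (EIG): replacing the intractable posterior \(p(\theta \mid y, d)\) with a variational family \(q_\phi\) gives

\[
\mathrm{EIG}(d)
= \mathbb{E}_{p(y,\theta \mid d)}\!\left[\log \frac{p(\theta \mid y, d)}{p(\theta)}\right]
\;\ge\;
\mathbb{E}_{p(y,\theta \mid d)}\!\left[\log \frac{q_\phi(\theta \mid y, d)}{p(\theta)}\right],
\]

with equality when \(q_\phi\) matches the true posterior; maximizing over \(\phi\) tightens the bound, which is what the variational-parameter optimization in the abstract refers to.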
CAVIAR: Categorical-Variable Embeddings for Accurate and Robust Inference
Social science research often hinges on the relationship between categorical variables and outcomes.
Low-Rank Robust Subspace Tensor Clustering for Metro Passenger Flow Modeling
Moreover, a case study of station clustering based on real passenger flow data is conducted, yielding valuable insights.
Human Activity Recognition using Smartphones
In our project, we have created an Android application that recognizes daily human activities and calculates the calories burnt in real time.
Non-negative Subspace Feature Representation for Few-shot Learning in Medical Imaging
Extensive empirical studies are conducted to validate the effectiveness of NMF, especially its supervised variants (e.g., discriminative NMF, and supervised and constrained NMF with sparseness), in comparison with principal component analysis (PCA), i.e., the collaborative-representation-based dimensionality reduction technique derived from eigenvectors.
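The supervised NMF variants mentioned above add label-dependent terms that are not shown here, but plain NMF itself can be sketched with the classic Lee–Seung multiplicative updates (a minimal NumPy implementation, assuming the standard Frobenius-norm objective):

```python
import numpy as np

def nmf(V, r, iters=200, eps=1e-9, seed=0):
    """Factor non-negative V (m x n) into W (m x r) @ H (r x n) by
    multiplicative updates minimizing ||V - WH||_F."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)     # update H, keeping it non-negative
        W *= (V @ H.T) / (W @ H @ H.T + eps)     # update W likewise
    return W, H

V = np.abs(np.random.default_rng(2).normal(size=(30, 20)))
W, H = nmf(V, r=5)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(W.shape, H.shape, rel_err)
```

The rows of H act as non-negative basis features and W gives each sample's coordinates in that basis, which is what makes NMF usable as a dimensionality reduction step for few-shot pipelines.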
Preventing Model Collapse in Gaussian Process Latent Variable Models
Gaussian process latent variable models (GPLVMs) are a versatile family of unsupervised learning models, commonly used for dimensionality reduction.
On the reduction of Linear Parameter-Varying State-Space models
This paper presents an overview and comparative study of the state of the art in State-Order Reduction (SOR) and Scheduling Dimension Reduction (SDR) for Linear Parameter-Varying (LPV) State-Space (SS) models, comparing and benchmarking their capabilities, limitations and performance.