Dimensionality Reduction
725 papers with code • 0 benchmarks • 10 datasets
Dimensionality reduction is the task of mapping high-dimensional data into a lower-dimensional representation while preserving as much of the data's meaningful structure as possible.
(Image credit: openTSNE)
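As a concrete illustration of the task, the following is a minimal sketch of principal component analysis (PCA), one of the most common dimensionality reduction techniques, implemented via the singular value decomposition. The function name and data are illustrative, not taken from any listed paper.

```python
import numpy as np

def pca_reduce(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                            # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt are components
    return Xc @ Vt[:k].T                               # scores in the k-dim space

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))   # 100 samples, 10 features
Z = pca_reduce(X, 2)             # reduced to 2 dimensions
print(Z.shape)                   # (100, 2)
```

Because the singular values are returned in descending order, the first reduced coordinate always captures at least as much variance as the second.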
Benchmarks
These leaderboards are used to track progress in Dimensionality Reduction
Libraries
Use these libraries to find Dimensionality Reduction models and implementations
Datasets
Latest papers with no code
Non-negative Subspace Feature Representation for Few-shot Learning in Medical Imaging
Extensive empirical studies validate the effectiveness of NMF, especially its supervised variants (e.g., discriminative NMF, and supervised and constrained NMF with sparseness), and compare it with principal component analysis (PCA), i.e., the collaborative representation-based dimensionality reduction technique derived from eigenvectors.
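The unsupervised NMF that these variants build on can be sketched with the classic Lee-Seung multiplicative updates: factor a non-negative matrix V into non-negative W and H so that WH approximates V. This is a generic sketch of plain NMF, not the supervised or constrained variants studied in the paper; the hyperparameters (rank, iteration count) are illustrative.

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9, seed=0):
    """Factor non-negative V (m x n) into W (m x r) @ H (r x n) using
    multiplicative updates that minimise the Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update W with H fixed
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(30, 20)))  # toy non-negative data
W, H = nmf(V, r=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)  # relative reconstruction error
```

Unlike PCA, both factors stay element-wise non-negative, which is what makes the resulting parts-based features attractive in imaging applications.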
Preventing Model Collapse in Gaussian Process Latent Variable Models
Gaussian process latent variable models (GPLVMs) are a versatile family of unsupervised learning models, commonly used for dimensionality reduction.
On the reduction of Linear Parameter-Varying State-Space models
This paper presents an overview and comparative study of the state of the art in State-Order Reduction (SOR) and Scheduling Dimension Reduction (SDR) for Linear Parameter-Varying (LPV) State-Space (SS) models, comparing and benchmarking their capabilities, limitations and performance.
Learning Intersections of Halfspaces with Distribution Shift: Improved Algorithms and SQ Lower Bounds
Recent work of Klivans, Stavropoulos, and Vasilyan initiated the study of testable learning with distribution shift (TDS learning), where a learner is given labeled samples from training distribution $\mathcal{D}$, unlabeled samples from test distribution $\mathcal{D}'$, and the goal is to output a classifier with low error on $\mathcal{D}'$ whenever the training samples pass a corresponding test.
Comparison of Methods in Human Skin Decomposition
Decomposition of skin pigment plays an important role in medical fields.
Nonparametric Bellman Mappings for Reinforcement Learning: Application to Robust Adaptive Filtering
This paper designs novel nonparametric Bellman mappings in reproducing kernel Hilbert spaces (RKHSs) for reinforcement learning (RL).
Evaluating Explanatory Capabilities of Machine Learning Models in Medical Diagnostics: A Human-in-the-Loop Approach
These features are not only used as a dimensionality reduction approach for the machine learning models, but also as a way to evaluate the explainability capabilities of the different models using agnostic and non-agnostic explainability techniques.
Implementation of the Principal Component Analysis onto High-Performance Computer Facilities for Hyperspectral Dimensionality Reduction: Results and Comparisons
Dimensionality reduction represents a critical preprocessing step in order to increase the efficiency and the performance of many hyperspectral imaging algorithms.
Representatividad Muestral en la Incertidumbre Simétrica Multivariada para la Selección de Atributos (Sample Representativeness in Multivariate Symmetric Uncertainty for Feature Selection)
In this work, we analyze the behavior of the multivariate symmetric uncertainty (MSU) measure through the use of statistical simulation techniques under various mixes of informative and non-informative randomly generated features.
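The pairwise symmetric uncertainty underlying the multivariate MSU measure is the standard information-theoretic quantity SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)), which normalises mutual information to [0, 1]. A minimal sketch for discrete features (the bivariate case only, not the multivariate extension analysed in the paper):

```python
import numpy as np
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a discrete sequence."""
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), in [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))   # joint entropy H(X, Y)
    mi = hx + hy - hxy               # I(X; Y) = H(X) + H(Y) - H(X, Y)
    return 2 * mi / (hx + hy) if hx + hy > 0 else 0.0

x = [0, 0, 1, 1, 0, 1]
su_self = symmetric_uncertainty(x, x)        # identical features -> 1.0
su_const = symmetric_uncertainty(x, [0] * 6)  # uninformative feature -> 0.0
```

A feature identical to the target scores 1, and a constant (non-informative) feature scores 0, which is exactly the informative/non-informative contrast the simulation study manipulates.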
Visualizing High-Dimensional Temporal Data Using Direction-Aware t-SNE
Most existing dimensionality reduction techniques, such as t-SNE and UMAP, do not take into account the temporal or relational nature of the data when constructing the embeddings, resulting in temporally cluttered visualizations that obscure potentially interesting patterns.
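One generic way to make an embedding time-aware, sketched below, is to blend the feature-space distance with a distance on the time index before embedding; here this is done with classical multidimensional scaling rather than t-SNE, purely for brevity. This is an illustration of the general idea only, not the Direction-Aware t-SNE method of the paper, and the trade-off weight `lam` is a hypothetical parameter.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS: embed points with pairwise distance matrix D into k dims."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]            # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))                 # toy temporal sequence of 50 frames
t = np.arange(50, dtype=float)[:, None]
Dx = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # feature distances
Dt = np.abs(t - t.T)                                   # time-index distances
lam = 0.1                                    # hypothetical temporal weight
Y = classical_mds(Dx + lam * Dt, k=2)        # time-aware 2-D embedding
```

Increasing `lam` pulls temporally adjacent frames together in the embedding, reducing the temporal clutter that the abstract describes for purely feature-based methods.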