Matrix Completion
131 papers with code • 0 benchmarks • 4 datasets
Matrix Completion is a method for recovering missing information. It originates in machine learning and usually deals with highly sparse matrices. Missing or unknown entries are estimated by fitting a low-rank matrix to the known entries.
Source: A Fast Matrix-Completion-Based Approach for Recommendation Systems
Benchmarks
These leaderboards are used to track progress in Matrix Completion.
Libraries
Use these libraries to find Matrix Completion models and implementations
Latest papers
Multiple Imputation with Neural Network Gaussian Process for High-dimensional Incomplete Data
Single imputation methods, such as matrix completion, do not adequately account for imputation uncertainty and hence can yield improper statistical inference.
A Generalized Latent Factor Model Approach to Mixed-data Matrix Completion with Entrywise Consistency
Matrix completion is a class of machine learning methods that concerns the prediction of missing entries in a partially observed matrix.
Hyperparameter optimization in deep multi-target prediction
As a result of the ever increasing complexity of configuring and fine-tuning machine learning models, the field of automated machine learning (AutoML) has emerged over the past decade.
Bounded Simplex-Structured Matrix Factorization: Algorithms, Identifiability and Applications
In this paper, we propose a new low-rank matrix factorization model dubbed bounded simplex-structured matrix factorization (BSSMF).
Accelerating SGD for Highly Ill-Conditioned Huge-Scale Online Matrix Completion
The matrix completion problem seeks to recover a $d\times d$ ground truth matrix of low rank $r\ll d$ from observations of its individual elements.
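A plain stochastic gradient descent (SGD) baseline for this factored objective minimizes $\sum_{(i,j)\ \text{observed}} (L_i^\top R_j - M_{ij})^2$ by updating one observed entry at a time. The sketch below is a generic baseline with illustrative step sizes, not the preconditioned method proposed in the paper.

```python
import numpy as np

# Hedged sketch: vanilla SGD on the factored matrix-completion objective.
# Step size and epoch count are illustrative assumptions.
rng = np.random.default_rng(1)

d, r = 40, 2
M = rng.normal(size=(d, r)) @ rng.normal(size=(r, d))   # rank-r ground truth
obs = np.argwhere(rng.random((d, d)) < 0.4)             # observed (i, j) pairs

L = rng.normal(scale=0.1, size=(d, r))
R = rng.normal(scale=0.1, size=(d, r))
step = 0.05

for _ in range(300):                 # epochs over shuffled observations
    rng.shuffle(obs)
    for i, j in obs:
        err = L[i] @ R[j] - M[i, j]  # residual on one observed entry
        L[i], R[j] = L[i] - step * err * R[j], R[j] - step * err * L[i]

rel_err = np.linalg.norm(L @ R.T - M) / np.linalg.norm(M)
```

On ill-conditioned problems this vanilla scheme slows down badly as $r$ grows, which is exactly the regime the paper's acceleration targets.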
Matrix Completion with Cross-Concentrated Sampling: Bridging Uniform Sampling and CUR Sampling
While uniform sampling has been widely studied in the matrix completion literature, CUR sampling approximates a low-rank matrix via row and column samples.
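For reference, a basic CUR approximation builds the estimate $\hat M = C\,U^{+}R$ from sampled columns $C$, sampled rows $R$, and their intersection $U$. The sketch below uses plain uniform row/column sampling as an assumption; the paper's cross-concentrated scheme interpolates between this and entrywise uniform sampling.

```python
import numpy as np

# Illustrative CUR approximation of a low-rank matrix.
# Sample counts and the uniform sampling scheme are assumptions.
rng = np.random.default_rng(2)

d, r = 50, 3
M = rng.normal(size=(d, r)) @ rng.normal(size=(r, d))   # rank-r matrix

rows = rng.choice(d, size=10, replace=False)
cols = rng.choice(d, size=10, replace=False)

C = M[:, cols]                       # sampled columns
R = M[rows, :]                       # sampled rows
U = M[np.ix_(rows, cols)]            # row/column intersection

M_hat = C @ np.linalg.pinv(U) @ R    # skeleton (CUR) reconstruction
rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
```

When the matrix is exactly rank $r$ and the intersection block has the same rank, this skeleton decomposition recovers the matrix exactly; for noisy or approximately low-rank matrices, the choice of sampling distribution drives the error.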
Adaptive and Implicit Regularization for Matrix Completion
Theoretically, we show that the adaptive regularization of AIR enhances the implicit regularization and vanishes at the end of training.
Forecasting Algorithms for Causal Inference with Panel Data
Conducting causal inference with panel data is a core challenge in social science research.
A Perturbation Bound on the Subspace Estimator from Canonical Projections
This paper derives a perturbation bound on the optimal subspace estimator obtained from a subset of its canonical projections contaminated by noise.
Sensing Theorems for Unsupervised Learning in Linear Inverse Problems
In this paper, we present necessary and sufficient sensing conditions for learning the signal model from measurement data alone which only depend on the dimension of the model and the number of operators or properties of the group action that the model is invariant to.