Search Results for author: Salar Fattahi

Found 23 papers, 2 papers with code

Convergence of Gradient Descent with Small Initialization for Unregularized Matrix Completion

no code implementations · 9 Feb 2024 · Jianhao Ma, Salar Fattahi

In the over-parameterized regime where $r'\geq r$, we show that, with $\widetilde\Omega(dr^9)$ observations, GD with an initial point $\|\mathrm{U}_0\| \leq \epsilon$ converges near-linearly to an $\epsilon$-neighborhood of $\mathrm{X}^\star$.

Matrix Completion
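
A minimal numpy sketch of this setting, with illustrative sizes, sampling rate, step size, and iteration count (not the paper's exact configuration): run gradient descent on a factorized matrix completion loss, starting from a small initialization.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, r_over = 50, 2, 5                      # r_over >= r: over-parameterized factor width
U_star = rng.standard_normal((d, r))
X_star = U_star @ U_star.T                   # planted rank-r PSD ground truth

m = np.triu(rng.random((d, d)) < 0.4)        # symmetric observation mask (plays P_Omega)
mask = m | m.T

# Gradient descent on f(U) = 0.5 * ||P_Omega(U U^T - X*)||_F^2 from a small initialization.
eps, eta = 1e-3, 2e-3
U = eps * rng.standard_normal((d, r_over)) / np.sqrt(d)
for _ in range(3000):
    residual = mask * (U @ U.T - X_star)     # symmetric, so grad f(U) = 2 * residual @ U
    U -= eta * 2.0 * residual @ U

print("relative error:", np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star))
```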

Solution Path of Time-varying Markov Random Fields with Discrete Regularization

no code implementations · 25 Jul 2023 · Salar Fattahi, Andres Gomez

More specifically, we show that the entire solution path of the time-varying MRF for all sparsity levels can be obtained in $\mathcal{O}(pT^3)$ time, where $T$ is the number of time steps and $p$ is the number of unknown parameters at any given time.

Robust Sparse Mean Estimation via Incremental Learning

1 code implementation · 24 May 2023 · Jianhao Ma, Rui Ray Chen, Yinghui He, Salar Fattahi, Wei Hu

This paper presents a simple mean estimator that overcomes both challenges under moderate conditions: it runs in near-linear time and memory (both with respect to the ambient dimension) while requiring only $\tilde O(k)$ samples to recover the true mean.

Incremental Learning
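
The estimator below is not the paper's algorithm; it is a toy baseline for the same setting (a $k$-sparse mean, a constant fraction of grossly corrupted samples): take coordinate-wise medians for robustness, then hard-threshold to the $k$ largest entries for sparsity.

```python
import numpy as np

rng = np.random.default_rng(1)

d, k, n = 1000, 10, 400                 # ambient dimension, sparsity, sample size
mu = np.zeros(d)
mu[:k] = 5.0                            # k-sparse true mean

X = mu + rng.standard_normal((n, d))    # inliers
X[: n // 10] += 50.0                    # grossly corrupt 10% of the samples

def sparse_robust_mean(X, k):
    """Coordinate-wise median (robust to a constant corruption fraction),
    then hard-threshold to the k largest entries (enforces sparsity)."""
    med = np.median(X, axis=0)
    est = np.zeros_like(med)
    top = np.argsort(np.abs(med))[-k:]
    est[top] = med[top]
    return est

print("l2 error:", np.linalg.norm(sparse_robust_mean(X, k) - mu))
```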

Can Learning Be Explained By Local Optimality In Low-rank Matrix Recovery?

no code implementations · 21 Feb 2023 · Jianhao Ma, Salar Fattahi

In matrix completion, even with slight rank overestimation and mild noise, true solutions emerge as either non-critical points or strict saddle points.

Matrix Completion

Simple Alternating Minimization Provably Solves Complete Dictionary Learning

no code implementations · 23 Oct 2022 · Geyu Liang, Gavin Zhang, Salar Fattahi, Richard Y. Zhang

This paper focuses on the complete dictionary learning problem, where the goal is to reparametrize a set of given signals as linear combinations of atoms from a learned dictionary.

Dictionary Learning
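
A minimal sketch of alternating minimization for the simpler orthonormal-dictionary special case; the sizes, sparsity level, and Procrustes dictionary update are illustrative simplifications, not the exact algorithm analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

d, n, s = 20, 500, 3                                   # atoms = signal dim (complete), signals, sparsity
D_true = np.linalg.qr(rng.standard_normal((d, d)))[0]  # orthonormal ground-truth dictionary

A_true = np.zeros((d, n))                              # sparse coefficient matrix
for j in range(n):
    idx = rng.choice(d, size=s, replace=False)
    A_true[idx, j] = rng.standard_normal(s)
Y = D_true @ A_true                                    # observed signals

D = np.linalg.qr(rng.standard_normal((d, d)))[0]       # random orthonormal initialization
for _ in range(50):
    # Sparse coding step: with orthonormal D, keep the s largest coefficients per signal.
    A = D.T @ Y
    cutoff = np.sort(np.abs(A), axis=0)[-s]
    A[np.abs(A) < cutoff] = 0.0
    # Dictionary update step: best orthogonal fit to (Y, A) via the Procrustes solution.
    U, _, Vt = np.linalg.svd(Y @ A.T)
    D = U @ Vt

print("relative residual:", np.linalg.norm(Y - D @ A) / np.linalg.norm(Y))
```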

Behind the Scenes of Gradient Descent: A Trajectory Analysis via Basis Function Decomposition

1 code implementation · 1 Oct 2022 · Jianhao Ma, Lingjun Guo, Salar Fattahi

This work analyzes the solution trajectory of gradient-based algorithms via a novel basis function decomposition.

Tensor Decomposition

Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution

no code implementations · 15 Jul 2022 · Jianhao Ma, Salar Fattahi

This work characterizes the effect of depth on the optimization landscape of linear regression, showing that, despite their nonconvexity, deeper models have a more desirable optimization landscape.

Efficient Inference of Spatially-varying Gaussian Markov Random Fields with Applications in Gene Regulatory Networks

no code implementations · 21 Jun 2022 · Visweswaran Ravikumar, Tong Xu, Wajd N. Al-Holou, Salar Fattahi, Arvind Rao

In this paper, we study the problem of inferring spatially-varying Gaussian Markov random fields (SV-GMRF) where the goal is to learn a network of sparse, context-specific GMRFs representing network relationships between genes.

Preconditioned Gradient Descent for Overparameterized Nonconvex Burer-Monteiro Factorization with Global Optimality Certification

no code implementations · 7 Jun 2022 · Gavin Zhang, Salar Fattahi, Richard Y. Zhang

We consider using gradient descent to minimize the nonconvex function $f(X)=\phi(XX^{T})$ over an $n\times r$ factor matrix $X$, in which $\phi$ is an underlying smooth convex cost function defined over $n\times n$ matrices.
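
The sketch below is one way to instantiate this setup, assuming the quadratic cost $\phi(M)=\frac{1}{2}\|M-M^\star\|_F^2$ and a right-preconditioned update $X \leftarrow X - \eta\,\nabla f(X)(X^{T}X+\lambda I)^{-1}$ with damping $\lambda$ tied to the current error; the cost, damping rule, and step size are illustrative choices, not the paper's certified procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

n, r_true, r = 30, 2, 6                      # over-parameterized: r > r_true
Z = rng.standard_normal((n, r_true))
M_star = Z @ Z.T                             # PSD target; phi(M) = 0.5 * ||M - M*||_F^2

def grad_f(X):
    # For this phi, the gradient of f(X) = phi(X X^T) is 2 * (X X^T - M*) @ X.
    return 2.0 * (X @ X.T - M_star) @ X

X = 0.1 * rng.standard_normal((n, r))
eta = 0.05
for _ in range(500):
    lam = np.linalg.norm(X @ X.T - M_star)   # damping tied to the current error (illustrative)
    P = np.linalg.inv(X.T @ X + lam * np.eye(r))
    X -= eta * grad_f(X) @ P                 # right-preconditioned gradient step

print("relative error:", np.linalg.norm(X @ X.T - M_star) / np.linalg.norm(M_star))
```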

Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization

no code implementations · 17 Feb 2022 · Jianhao Ma, Salar Fattahi

We prove that a simple SubGM with small initialization is agnostic to both over-parameterization and noise in the measurements.
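
A minimal sketch of a subgradient method (SubGM) on the $\ell_1$ loss $\|\mathcal{A}(UU^{T})-y\|_1$, with Gaussian measurements, an over-parameterized factor, a small initialization, and a fraction of grossly corrupted measurements; all sizes and the geometric step-size decay are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

d, r_true, r, m = 20, 1, 3, 600                    # over-parameterized rank r > r_true
u = rng.standard_normal((d, r_true))
X_star = u @ u.T
A = rng.standard_normal((m, d, d)) / np.sqrt(m)    # Gaussian measurement matrices
y = np.einsum('kij,ij->k', A, X_star)
y[: m // 20] += 10.0                               # grossly corrupt 5% of measurements

def subgrad(U):
    """A subgradient of ||A(U U^T) - y||_1 with respect to U."""
    res = np.einsum('kij,ij->k', A, U @ U.T) - y
    S = np.einsum('k,kij->ij', np.sign(res), A)
    return (S + S.T) @ U

U = 1e-3 * rng.standard_normal((d, r))             # small initialization
step = 1e-2
for _ in range(3000):
    U -= step * subgrad(U)
    step *= 0.999                                  # geometrically decaying step size

print("relative error:", np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star))
```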

Preconditioned Gradient Descent for Over-Parameterized Nonconvex Matrix Factorization

no code implementations · NeurIPS 2021 · Jialun Zhang, Salar Fattahi, Richard Zhang

This over-parameterized regime of matrix factorization significantly slows down the convergence of local search algorithms, from a linear rate with $r=r^{\star}$ to a sublinear rate when $r>r^{\star}$.

Scalable Inference of Sparsely-changing Gaussian Markov Random Fields

no code implementations · NeurIPS 2021 · Salar Fattahi, Andres Gomez

Most of the existing methods for the inference of time-varying Markov random fields (MRFs) rely on regularized maximum likelihood estimation (MLE), which typically suffers from weak statistical guarantees and high computational cost.

Scalable Inference of Sparsely-changing Markov Random Fields with Strong Statistical Guarantees

no code implementations · NeurIPS 2021 · Salar Fattahi, Andres Gomez

In this paper, we study the problem of inferring time-varying Markov random fields (MRFs), where the underlying graphical model is both sparse and changes sparsely over time.

Sign-RIP: A Robust Restricted Isometry Property for Low-rank Matrix Recovery

no code implementations · 5 Feb 2021 · Jianhao Ma, Salar Fattahi

The restricted isometry property (RIP), which essentially states that the linear measurements are approximately norm-preserving, plays a crucial role in the study of the low-rank matrix recovery problem.
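
For reference, one common form of the rank-$r$ RIP makes this norm-preservation statement precise: there exists a constant $\delta \in (0,1)$ such that

$$(1-\delta)\,\|X\|_F \;\leq\; \|\mathcal{A}(X)\|_2 \;\leq\; (1+\delta)\,\|X\|_F \quad \text{for all } X \text{ with } \mathrm{rank}(X) \leq r.$$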

Learning Partially Observed Linear Dynamical Systems from Logarithmic Number of Samples

no code implementations · 8 Oct 2020 · Salar Fattahi

In this paper, we remedy this undesirable dependency on the system dimension by introducing an $\ell_1$-regularized estimation method that can accurately estimate the Markov parameters of the system, provided that the number of samples scales logarithmically with the system dimension.
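
A minimal sketch of an $\ell_1$-regularized (lasso) estimate of sparse Markov parameters via proximal gradient descent (ISTA); the regressor matrix of stacked past inputs is idealized here as i.i.d. Gaussian, and all sizes and the regularization weight are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Regress outputs on a window of past inputs; the rows of G stack the Markov parameters.
n_u, n_y, T, N = 40, 5, 4, 200                    # inputs, outputs, window length, samples
G_true = np.zeros((n_y, n_u * T))
G_true[rng.random(G_true.shape) < 0.05] = 1.0     # sparse ground truth

Phi = rng.standard_normal((N, n_u * T))           # stacked input windows (idealized as i.i.d.)
Y = Phi @ G_true.T + 0.1 * rng.standard_normal((N, n_y))

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

lam = 0.1
L = np.linalg.norm(Phi, 2) ** 2 / N               # Lipschitz constant of the smooth part
G = np.zeros_like(G_true)
for _ in range(500):                              # ISTA: gradient step + soft-thresholding
    grad = (Phi @ G.T - Y).T @ Phi / N
    G = soft(G - grad / L, lam / L)

print("estimation error:", np.linalg.norm(G - G_true))
```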

Efficient Learning of Distributed Linear-Quadratic Controllers

no code implementations · 21 Sep 2019 · Salar Fattahi, Nikolai Matni, Somayeh Sojoudi

In this work, we propose a robust approach to designing distributed controllers for unknown but sparse linear time-invariant systems.

Learning Sparse Dynamical Systems from a Single Sample Trajectory

no code implementations · 20 Apr 2019 · Salar Fattahi, Nikolai Matni, Somayeh Sojoudi

In particular, we show that the proposed estimator can correctly identify the sparsity pattern of the system matrices with high probability, provided that the length of the sample trajectory exceeds a threshold.

Exact Guarantees on the Absence of Spurious Local Minima for Non-negative Rank-1 Robust Principal Component Analysis

no code implementations · 30 Dec 2018 · Salar Fattahi, Somayeh Sojoudi

In particular, it is shown that a constant fraction of the measurements could be grossly corrupted and yet they would not create any spurious local solution.

Sample Complexity of Sparse System Identification Problem

no code implementations · 21 Mar 2018 · Salar Fattahi, Somayeh Sojoudi

A by-product of this result is that the number of sample trajectories required for sparse system identification is significantly smaller than the dimension of the system.

Large-Scale Sparse Inverse Covariance Estimation via Thresholding and Max-Det Matrix Completion

no code implementations · ICML 2018 · Richard Y. Zhang, Salar Fattahi, Somayeh Sojoudi

The sparse inverse covariance estimation problem is commonly solved using an $\ell_{1}$-regularized Gaussian maximum likelihood estimator known as "graphical lasso", but its computational cost becomes prohibitive for large data sets.

Matrix Completion
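
A graphical lasso estimator of this form is available in standard libraries; a minimal usage sketch with synthetic data (the regularization strength and the planted precision matrix are illustrative):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(6)

# Sparse planted precision matrix (tridiagonal, hence positive definite).
p, n = 20, 500
Theta = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta), size=n)

# l1-regularized Gaussian MLE ("graphical lasso"); alpha controls the sparsity level.
model = GraphicalLasso(alpha=0.05).fit(X)
print("nonzeros in estimated precision:", np.count_nonzero(model.precision_))
```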

Sparse Inverse Covariance Estimation for Chordal Structures

no code implementations · 24 Nov 2017 · Salar Fattahi, Richard Y. Zhang, Somayeh Sojoudi

We have also derived a closed-form solution that is optimal when the thresholded sample covariance matrix has an acyclic structure.

Matrix Completion

Graphical Lasso and Thresholding: Equivalence and Closed-form Solutions

no code implementations · 30 Aug 2017 · Salar Fattahi, Somayeh Sojoudi

The objective of this paper is to compare the computationally heavy GL technique with a numerically cheap heuristic method based on simply thresholding the sample covariance matrix.
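
A minimal sketch of the thresholding heuristic, assuming soft-thresholding of the off-diagonal sample covariance entries; the threshold level is illustrative, and the paper gives the precise conditions under which the resulting pattern matches GL's.

```python
import numpy as np

def threshold_heuristic(X, tau):
    """Cheap surrogate for GL: soft-threshold the off-diagonal entries of the
    sample covariance to estimate the sparsity pattern of the precision matrix."""
    S = np.cov(X, rowvar=False)
    T = np.sign(S) * np.maximum(np.abs(S) - tau, 0.0)
    np.fill_diagonal(T, np.diag(S))                # leave the diagonal untouched
    return T

rng = np.random.default_rng(7)
X = rng.standard_normal((500, 20))                 # no true edges, so few should survive
T = threshold_heuristic(X, 0.1)
print("off-diagonal nonzeros kept:", (np.count_nonzero(T) - 20) // 2)
```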
