no code implementations • 21 Mar 2024 • Naichen Shi, Salar Fattahi, Raed Al Kontar
In this work, we study the problem of common and unique feature extraction from noisy data.
no code implementations • 9 Feb 2024 • Jianhao Ma, Salar Fattahi
In the over-parameterized regime where $r'\geq r$, we show that, with $\widetilde\Omega(dr^9)$ observations, GD with an initial point $\|\mathrm{U}_0\| \leq \epsilon$ converges near-linearly to an $\epsilon$-neighborhood of $\mathrm{X}^\star$.
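A minimal sketch of the procedure being analyzed, assuming a symmetric matrix-sensing loss with Gaussian measurements; the problem sizes, step size, and initialization scale are illustrative toys, far from the $\widetilde\Omega(dr^9)$ regime of the theory:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, r_over, m = 10, 1, 4, 300   # ambient dim, true rank, search rank r' > r, #measurements

# Ground truth X* = U* U*^T and symmetric Gaussian sensing matrices A_i.
U_star = rng.standard_normal((d, r))
X_star = U_star @ U_star.T
A = rng.standard_normal((m, d, d))
A = (A + A.transpose(0, 2, 1)) / 2
y = np.einsum('mij,ij->m', A, X_star)

# GD on f(U) = (1/2m) sum_i (<A_i, U U^T> - y_i)^2, from a small initial point ||U_0|| <= eps.
eps, eta = 1e-4, 2e-3
U = eps * rng.standard_normal((d, r_over))
for _ in range(5000):
    resid = np.einsum('mij,ij->m', A, U @ U.T) - y
    U -= eta * (2 / m) * np.einsum('m,mij->ij', resid, A) @ U   # chain rule: (2/m) sum_i resid_i A_i U

print(np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star))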
no code implementations • 25 Jul 2023 • Salar Fattahi, Andres Gomez
More specifically, we show that the entire solution path of the time-varying MRF across all sparsity levels can be obtained in $\mathcal{O}(pT^3)$ time, where $T$ is the number of time steps and $p$ is the number of unknown parameters at any given time.
1 code implementation • 24 May 2023 • Jianhao Ma, Rui Ray Chen, Yinghui He, Salar Fattahi, Wei Hu
This paper presents a simple mean estimator that overcomes both challenges under moderate conditions: it runs in near-linear time and memory (both with respect to the ambient dimension) while requiring only $\tilde O(k)$ samples to recover the true mean.
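The estimator itself is not spelled out in this snippet. As an illustrative stand-in only (this is not the paper's estimator), a coordinate-wise median-of-means followed by hard thresholding to the top-$k$ coordinates captures the flavor of fast, robust, sparsity-aware mean estimation:

```python
import numpy as np

def sparse_robust_mean(X, k, n_blocks=10):
    """Illustrative baseline (NOT the paper's estimator): coordinate-wise
    median-of-means, then keep only the k largest-magnitude coordinates."""
    blocks = np.array_split(X, n_blocks)                  # split samples into blocks
    block_means = np.stack([b.mean(axis=0) for b in blocks])
    mu = np.median(block_means, axis=0)                   # robust per-coordinate estimate
    out = np.zeros(X.shape[1])
    keep = np.argsort(np.abs(mu))[-k:]                    # exploit k-sparsity of the mean
    out[keep] = mu[keep]
    return out

# Usage: heavy contamination of a few samples barely moves the estimate.
X = np.random.default_rng(0).standard_normal((200, 1000))
X[:10] += 50                                              # a few grossly corrupted samples
print(np.linalg.norm(sparse_robust_mean(X, k=20)))        # stays near the true mean 0
```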
no code implementations • 21 Feb 2023 • Jianhao Ma, Salar Fattahi
In matrix completion, even with slight rank overestimation and mild noise, true solutions emerge either as non-critical points or as strict saddle points.
no code implementations • 23 Oct 2022 • Geyu Liang, Gavin Zhang, Salar Fattahi, Richard Y. Zhang
This paper focuses on the complete dictionary learning problem, where the goal is to reparametrize a set of given signals as linear combinations of atoms from a learned dictionary.
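As a hedged illustration of the setting (using scikit-learn's generic dictionary learner as a stand-in, not the paper's method), the "complete" case takes as many atoms as signal dimensions:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_samples, n_features = 500, 16

# Synthetic signals: sparse combinations of atoms from an unknown square dictionary.
D_true = np.linalg.qr(rng.standard_normal((n_features, n_features)))[0]
codes = rng.standard_normal((n_samples, n_features)) * (rng.random((n_samples, n_features)) < 0.1)
X = codes @ D_true

# Complete setting: number of atoms equals the signal dimension.
dl = DictionaryLearning(n_components=n_features, alpha=0.1, random_state=0)
codes_hat = dl.fit_transform(X)        # sparse coefficients
D_hat = dl.components_                 # learned dictionary (atoms as rows)
print(np.linalg.norm(X - codes_hat @ D_hat) / np.linalg.norm(X))
```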
1 code implementation • 1 Oct 2022 • Jianhao Ma, Lingjun Guo, Salar Fattahi
This work analyzes the solution trajectory of gradient-based algorithms via a novel basis function decomposition.
no code implementations • 15 Jul 2022 • Jianhao Ma, Salar Fattahi
This work characterizes the effect of depth on the optimization landscape of linear regression, showing that, despite their nonconvexity, deeper models have a more desirable optimization landscape.
no code implementations • 21 Jun 2022 • Visweswaran Ravikumar, Tong Xu, Wajd N. Al-Holou, Salar Fattahi, Arvind Rao
In this paper, we study the problem of inferring spatially-varying Gaussian Markov random fields (SV-GMRF), where the goal is to learn a collection of sparse, context-specific GMRFs representing relationships between genes.
no code implementations • 7 Jun 2022 • Gavin Zhang, Salar Fattahi, Richard Y. Zhang
We consider using gradient descent to minimize the nonconvex function $f(X)=\phi(XX^{T})$ over an $n\times r$ factor matrix $X$, in which $\phi$ is an underlying smooth convex cost function defined over $n\times n$ matrices.
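For concreteness, the gradient that drives such an analysis follows directly from the chain rule (standard matrix calculus, not specific to this paper):

$$\nabla f(X) \;=\; \big(\nabla\phi(XX^{T}) + \nabla\phi(XX^{T})^{T}\big)X, \qquad X_{t+1} = X_t - \eta\,\nabla f(X_t),$$

which simplifies to $2\,\nabla\phi(XX^{T})\,X$ when $\nabla\phi$ is symmetric.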
no code implementations • 17 Feb 2022 • Jianhao Ma, Salar Fattahi
We prove that a simple subgradient method (SubGM) with small initialization is agnostic to both over-parameterization and noise in the measurements.
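A minimal sketch of such a SubGM, assuming an $\ell_1$ matrix-sensing loss with a fraction of grossly corrupted measurements; the hyperparameters and decaying step-size schedule are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, r_over, m = 10, 1, 4, 300
U_star = rng.standard_normal((d, r))
A = rng.standard_normal((m, d, d))
A = (A + A.transpose(0, 2, 1)) / 2                 # symmetric sensing matrices
y = np.einsum('mij,ij->m', A, U_star @ U_star.T)
y[:30] += 10 * rng.standard_normal(30)             # grossly corrupt 10% of measurements

# SubGM on the robust loss f(U) = (1/m) ||A(UU^T) - y||_1, started from a
# small random initialization, with a geometrically decaying step size.
U = 1e-4 * rng.standard_normal((d, r_over))
for t in range(4000):
    s = np.sign(np.einsum('mij,ij->m', A, U @ U.T) - y)    # subgradient of the l1 loss
    U -= 0.1 * (0.999 ** t) * (2 / m) * np.einsum('m,mij->ij', s, A) @ U

print(np.linalg.norm(U @ U.T - U_star @ U_star.T))
```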
no code implementations • NeurIPS 2021 • Jialun Zhang, Salar Fattahi, Richard Zhang
This over-parameterized regime of matrix factorization significantly slows down the convergence of local search algorithms, from a linear rate with $r=r^{\star}$ to a sublinear rate when $r>r^{\star}$.
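A small simulation illustrates the slowdown (dimensions, step size, and initialization scale are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r_star = 20, 2
W = rng.standard_normal((d, r_star))
M = W @ W.T                                 # rank-r* target matrix

def gd_error(r, iters=3000, eta=0.002):
    """Run GD on ||UU^T - M||_F^2 with search rank r; return the final error."""
    U = 0.1 * rng.standard_normal((d, r))
    for _ in range(iters):
        U -= eta * 4 * (U @ U.T - M) @ U    # gradient of ||UU^T - M||_F^2
    return np.linalg.norm(U @ U.T - M)

print("exact rank r = r*:  ", gd_error(r_star))        # linear (geometric) convergence
print("over-param r = 3r*: ", gd_error(3 * r_star))    # visibly slower, sublinear decay
```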
no code implementations • NeurIPS 2021 • Salar Fattahi, Andres Gomez
Most existing methods for the inference of time-varying Markov random fields (MRFs) rely on regularized maximum likelihood estimation (MLE), which typically suffers from weak statistical guarantees and high computational cost.
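For context, a typical regularized-MLE formulation of this problem (generic notation; not necessarily the exact objective the paper critiques) penalizes both within-time and across-time sparsity of the precision matrices $\Theta_t$ given sample covariances $\widehat\Sigma_t$:

$$\min_{\Theta_1,\dots,\Theta_T \succ 0}\ \sum_{t=1}^{T}\Big(\langle \widehat\Sigma_t, \Theta_t\rangle - \log\det\Theta_t\Big) + \lambda_1\sum_{t=1}^{T}\|\Theta_t\|_{1,\mathrm{off}} + \lambda_2\sum_{t=2}^{T}\|\Theta_t - \Theta_{t-1}\|_{1}.$$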
no code implementations • NeurIPS 2021 • Salar Fattahi, Andres Gomez
In this paper, we study the problem of inferring time-varying Markov random fields (MRFs), where the underlying graphical model is both sparse and changes sparsely over time.
no code implementations • 5 Feb 2021 • Jianhao Ma, Salar Fattahi
The restricted isometry property (RIP), which essentially states that the linear measurements are approximately norm-preserving, plays a crucial role in the study of low-rank matrix recovery problems.
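For reference, a linear map $\mathcal{A}$ satisfies the rank-$r$ RIP with constant $\delta_r \in [0,1)$ if, for every matrix $X$ of rank at most $r$,

$$(1-\delta_r)\,\|X\|_F^2 \;\le\; \|\mathcal{A}(X)\|_2^2 \;\le\; (1+\delta_r)\,\|X\|_F^2.$$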
no code implementations • 8 Oct 2020 • Salar Fattahi
In this paper, we remedy this undesirable dependency on the system dimension by introducing an $\ell_1$-regularized estimation method that can accurately estimate the Markov parameters of the system, provided that the number of samples scales logarithmically with the system dimension.
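A minimal sketch of the idea, assuming a single-output finite-impulse-response view of the system; the dimensions, sparsity level, and regularization weight are illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_u, horizon, T = 30, 5, 200       # input dim, # Markov parameters, trajectory length

# Sparse ground-truth Markov parameters G_0, ..., G_{horizon-1} (single output).
G = rng.standard_normal((horizon, n_u)) * (rng.random((horizon, n_u)) < 0.1)

u = rng.standard_normal((T, n_u))
y = np.array([sum(G[k] @ u[t - k] for k in range(min(t + 1, horizon)))
              for t in range(T)]) + 0.01 * rng.standard_normal(T)

# l1-regularized least squares: regress y_t on the stacked past inputs
# [u_t; u_{t-1}; ...], so the sample complexity scales with the sparsity of
# the Markov parameters rather than the full system dimension.
Phi = np.stack([np.concatenate([u[t - k] for k in range(horizon)])
                for t in range(horizon - 1, T)])
est = Lasso(alpha=0.01).fit(Phi, y[horizon - 1:])
G_hat = est.coef_.reshape(horizon, n_u)     # recovered Markov parameters
print(np.linalg.norm(G_hat - G))
```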
no code implementations • 21 Sep 2019 • Salar Fattahi, Nikolai Matni, Somayeh Sojoudi
In this work, we propose a robust approach to designing distributed controllers for unknown-but-sparse linear time-invariant systems.
no code implementations • 20 Apr 2019 • Salar Fattahi, Nikolai Matni, Somayeh Sojoudi
In particular, we show that the proposed estimator can correctly identify the sparsity pattern of the system matrices with high probability, provided that the length of the sample trajectory exceeds a threshold.
no code implementations • 30 Dec 2018 • Salar Fattahi, Somayeh Sojoudi
In particular, it is shown that a constant fraction of the measurements could be grossly corrupted and yet they would not create any spurious local solution.
no code implementations • 21 Mar 2018 • Salar Fattahi, Somayeh Sojoudi
A by-product of this result is that the number of sample trajectories required for sparse system identification is significantly smaller than the dimension of the system.
no code implementations • ICML 2018 • Richard Y. Zhang, Salar Fattahi, Somayeh Sojoudi
The sparse inverse covariance estimation problem is commonly solved using an $\ell_{1}$-regularized Gaussian maximum likelihood estimator known as "graphical lasso", but its computational cost becomes prohibitive for large data sets.
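For instance, the graphical lasso estimator is available off the shelf in scikit-learn; the tridiagonal precision matrix below is a synthetic illustration:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p, n = 10, 2000

# Data drawn from a sparse (tridiagonal) precision matrix.
Theta = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta), size=n)

model = GraphicalLasso(alpha=0.05).fit(X)
Theta_hat = model.precision_                 # sparse estimate of the inverse covariance
print((np.abs(Theta_hat) > 1e-3).sum(), "nonzeros recovered")
```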
no code implementations • 24 Nov 2017 • Salar Fattahi, Richard Y. Zhang, Somayeh Sojoudi
We have also derived a closed-form solution that is optimal when the thresholded sample covariance matrix has an acyclic structure.
no code implementations • 30 Aug 2017 • Salar Fattahi, Somayeh Sojoudi
The objective of this paper is to compare the computationally heavy GL technique with a numerically cheap heuristic based on simply thresholding the sample covariance matrix.
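A minimal sketch of the thresholding heuristic, assuming the goal is recovering the graph support; the threshold $\tau$ is a tuning parameter:

```python
import numpy as np

def thresholded_support(X, tau):
    """Numerically cheap heuristic: estimate the conditional-independence graph
    by thresholding off-diagonal entries of the sample covariance (no solver)."""
    S = np.cov(X, rowvar=False)          # O(n p^2) sample covariance
    support = np.abs(S) > tau            # elementwise comparison, no optimization
    np.fill_diagonal(support, True)      # always keep the diagonal
    return support
```

Compared against graphical lasso, this costs only the sample-covariance computation plus an elementwise comparison.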