1 code implementation • 23 Feb 2024 • Xi Chen, Zhewen Hou, Christopher A. Metzler, Arian Maleki, Shirin Jalali
We investigate both the theoretical and algorithmic aspects of likelihood-based methods for recovering a complex-valued signal from multiple sets of measurements, referred to as looks, affected by speckle (multiplicative) noise.
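As a rough illustration of the multi-look speckle setting described above (not the paper's algorithm), the sketch below simulates a complex-valued signal observed through several looks, each corrupted by independent multiplicative noise; the signal length, number of looks, and the intensity-averaging step are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, num_looks = 128, 4      # illustrative signal length and number of looks

# Complex-valued ground-truth signal
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Each look is corrupted by independent multiplicative (speckle) noise
looks = []
for _ in range(num_looks):
    w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    looks.append(w * x)    # elementwise multiplicative corruption

# Averaging squared magnitudes across looks reduces the speckle variance,
# a classical multi-look baseline (again, not the likelihood-based method above)
intensity_est = np.mean([np.abs(y) ** 2 for y in looks], axis=0)
```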
1 code implementation • 13 Dec 2020 • Ziyi Meng, Shirin Jalali, Xin Yuan
The hardware encoder typically consists of an (optical) imaging system designed to capture compressed measurements.
no code implementations • NeurIPS 2019 • Shirin Jalali, Carl Nuzman, Iraj Saniee
The universal approximation theorem states that any sufficiently regular function can be approximated arbitrarily closely by a neural network with a single hidden layer.
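The single-hidden-layer claim can be illustrated with a small numerical sketch: random tanh features with a least-squares-trained output layer fit a smooth target closely. The width, target function, and random-feature construction here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 200                                   # hidden-layer width (illustrative)

# Target function to approximate on [-pi, pi]
xs = np.linspace(-np.pi, np.pi, 400)
target = np.sin(xs)

# Single hidden layer with random weights; only the output layer is trained
W = rng.standard_normal(m)
b = rng.uniform(-np.pi, np.pi, m)
H = np.tanh(np.outer(xs, W) + b)          # hidden activations, shape (400, m)

# Least-squares fit of the output weights
c, *_ = np.linalg.lstsq(H, target, rcond=None)
approx = H @ c

max_err = np.max(np.abs(approx - target))
```

With a few hundred hidden units the maximum error on the sample grid is already small, consistent with the approximation guarantee.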
no code implementations • NeurIPS Workshop Deep_Invers 2019 • Pei Peng, Shirin Jalali, Xin Yuan
Compressed sensing concerns recovering a structured high-dimensional signal ${\bf x}\in R^n$ from its under-determined noisy linear measurements ${\bf y}\in R^m$, where $m\ll n$.
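The under-determined measurement model can be sketched as follows, using sparsity as the structure and ISTA as a standard recovery baseline (a generic sketch, not the method of the paper above); the dimensions, sparsity level, and regularization weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 200, 60, 5            # ambient dim, measurements (m << n), sparsity

# k-sparse signal and Gaussian measurement matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x + 0.01 * rng.standard_normal(m)   # noisy under-determined measurements

def ista(A, y, lam=0.02, steps=500):
    """Iterative soft-thresholding for the lasso objective (textbook baseline)."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    z = np.zeros(A.shape[1])
    for _ in range(steps):
        g = z - (A.T @ (A @ z - y)) / L     # gradient step on the data fit
        z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return z

x_hat = ista(A, y)
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```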
no code implementations • 15 Feb 2019 • Shirin Jalali, Carl Nuzman, Iraj Saniee
We show that a collection of Gaussian mixture models (GMMs) in $R^{n}$ can be optimally classified using $O(n)$ neurons in a neural network with two hidden layers (deep neural network), whereas, in contrast, a neural network with a single hidden layer (shallow neural network) would require at least $\Omega(\exp(n))$ neurons or possibly exponentially large coefficients.
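The intuition behind the depth separation is that the optimal decision statistic for Gaussian classes is quadratic in ${\bf x}$, which a two-hidden-layer network can compute with $O(n)$ neurons. A toy sketch of such a quadratic rule (a hypothetical two-class setting, not the paper's construction) is:

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 10, 1000
# Two zero-mean Gaussian classes differing only in covariance:
# class 0 ~ N(0, I), class 1 ~ N(0, 4I)  (illustrative choice)
X0 = rng.standard_normal((N, n))
X1 = 2.0 * rng.standard_normal((N, n))

# The log-likelihood ratio reduces to thresholding ||x||^2 -- a quadratic
# statistic, i.e., a sum of n squared coordinates
thresh = (4 * n / 3) * np.log(4.0)  # from equating the two log-densities

acc0 = np.mean(np.sum(X0**2, axis=1) <= thresh)  # class-0 accuracy
acc1 = np.mean(np.sum(X1**2, axis=1) > thresh)   # class-1 accuracy
accuracy = (acc0 + acc1) / 2
```

Each squared coordinate needs only a constant number of neurons to represent in the second layer, which is where the $O(n)$ count for the two-hidden-layer network comes from.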
no code implementations • 19 Dec 2017 • Dan Kushnir, Shirin Jalali, Iraj Saniee
Consequently, the expected overall running time of the algorithm is linear in $n$ and quasi-linear in $p$, at $o(np\ln{p})$, and the sample complexity is independent of $p$.