no code implementations • 6 May 2022 • Hang Zhang, Afshin Abdi, Faramarz Fekri
For the first time, we show that the correct graphical structure can be recovered under an indefinite sensing system ($d < p$) using insufficient samples ($n < p$).
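To make the two regimes concrete, here is a minimal sketch of the problem setup only (the recovery algorithm itself is the paper's contribution and is not reproduced here). All dimensions, the edge density, and the Gaussian sensing matrix `B` are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
p, d, n = 50, 30, 40  # ambient dim p, sensing dim d < p, sample count n < p

# Sparse precision matrix Theta: its off-diagonal support is the graph to recover.
Theta = 1.5 * np.eye(p)
for _ in range(30):
    i, j = rng.choice(p, 2, replace=False)
    Theta[i, j] = Theta[j, i] = 0.15  # small off-diagonals keep Theta positive definite

Sigma = np.linalg.inv(Theta)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)  # n < p latent samples
B = rng.normal(size=(d, p)) / np.sqrt(d)                 # indefinite sensing, d < p
Y = X @ B.T                                              # what the learner actually observes
print(Y.shape)  # (40, 30): both fewer samples and fewer coordinates than p
```

The point of the sketch is that the learner never sees the $p$-dimensional samples `X` directly, only their $d$-dimensional projections `Y`.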
no code implementations • 11 Apr 2022 • Hang Zhang, Afshin Abdi, Faramarz Fekri
This paper proposes a general framework to design a sparse sensing matrix $\mathbf{A}\in \mathbb{R}^{m\times n}$ in a linear measurement system $\mathbf{y} = \mathbf{A}\mathbf{x}^{\natural} + \mathbf{w}$, where $\mathbf{y} \in \mathbb{R}^m$, $\mathbf{x}^{\natural}\in \mathbb{R}^n$, and $\mathbf{w}$ denote the measurements, the signal with certain structures, and the measurement noise, respectively.
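The measurement model above can be instantiated in a few lines. This is a generic illustration, assuming a Bernoulli-Gaussian sparse sensing matrix and a sparse signal as the "structure"; the density, dimensions, and noise level are made-up values, and the paper's actual design criterion for $\mathbf{A}$ is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 200  # underdetermined: fewer measurements than unknowns

# Sparse sensing matrix A: each entry is nonzero with small probability.
density = 0.1
A = rng.normal(size=(m, n)) * (rng.random((m, n)) < density)

# Structured signal x_natural, here taken to be k-sparse.
k = 5
x_nat = np.zeros(n)
x_nat[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# Noisy linear measurements y = A x_natural + w.
w = 0.01 * rng.normal(size=m)
y = A @ x_nat + w
print(y.shape)  # (50,)
```

Sparsity in $\mathbf{A}$ matters in practice because each measurement then touches only a fraction of the signal coordinates, reducing storage and matrix-vector multiplication cost.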
no code implementations • 24 Jan 2022 • Yashas Malur Saidutta, Afshin Abdi, Faramarz Fekri
IoT devices generating enormous amounts of data, combined with state-of-the-art machine learning techniques, will revolutionize cyber-physical systems.
no code implementations • 19 Aug 2020 • Afshin Abdi, Saeed Rashidi, Faramarz Fekri, Tushar Krishna
In this paper, we consider the parallel implementation of an already-trained deep model on multiple processing nodes (a.k.a.
no code implementations • ICLR 2019 • Afshin Abdi, Faramarz Fekri
In distributed training, the communication cost due to the transmission of gradients or the parameters of the deep model is a major bottleneck in scaling up the number of processing nodes.
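One standard way to cut this communication cost is gradient sparsification; the snippet below shows generic top-$k$ sparsification as an illustration of the bottleneck being addressed, not the specific compression scheme proposed in this paper.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of a gradient vector.

    Only k (index, value) pairs need to be transmitted, so the cost drops
    from n floats to roughly k indices plus k floats.
    """
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of the top-k magnitudes
    compressed = np.zeros_like(grad)
    compressed[idx] = grad[idx]
    return compressed, idx

grad = np.array([0.1, -3.0, 0.02, 2.5, -0.4])
sparse_grad, kept = topk_sparsify(grad, k=2)
print(sparse_grad)  # [ 0.  -3.   0.   2.5  0. ]
```

In practice such schemes are paired with error feedback (accumulating the dropped entries locally) so that the compression error does not bias training.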
1 code implementation • 17 Jun 2018 • Alireza Aghasi, Afshin Abdi, Justin Romberg
We develop a fast, tractable technique called Net-Trim for simplifying a trained neural network.
1 code implementation • NeurIPS 2017 • Alireza Aghasi, Afshin Abdi, Nam Nguyen, Justin Romberg
This program seeks a sparse set of weights at each layer that keeps the layer inputs and outputs consistent with the originally trained model.
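For a single linear layer, that per-layer program reduces to an $\ell_1$-regularized least-squares fit of sparse weights to the original layer's input-output pairs. The sketch below solves this simplified linear analogue with ISTA (proximal gradient); it is an assumption-laden illustration, not Net-Trim's actual convex formulation, which also handles the ReLU nonlinearity.

```python
import numpy as np

def ista_sparse_layer(X, Y, lam=0.05, iters=500):
    """Minimize 0.5 * ||X W - Y||_F^2 + lam * ||W||_1 via ISTA.

    X: layer inputs (samples x in_dim); Y: the original layer's outputs.
    The l1 penalty drives many entries of W to exactly zero while the
    quadratic term keeps the layer's outputs consistent with the original.
    """
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(iters):
        G = X.T @ (X @ W - Y)               # gradient of the quadratic term
        W = W - step * G
        # soft-thresholding: the proximal operator of the l1 norm
        W = np.sign(W) * np.maximum(np.abs(W) - step * lam, 0.0)
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
W_true = np.zeros((20, 5))
W_true[:3] = rng.normal(size=(3, 5))        # only 3 of 20 input units matter
Y = X @ W_true
W_sparse = ista_sparse_layer(X, Y)
print(np.count_nonzero(np.abs(W_sparse) > 1e-3))  # far fewer than 20 * 5 entries
```

The same idea applied layer by layer, with the consistency constraint written for the post-ReLU activations, is what makes the full program tractable at scale.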