Search Results for author: Afshin Abdi

Found 7 papers, 2 papers with code

Structure Learning in Graphical Models from Indirect Observations

no code implementations • 6 May 2022 • Hang Zhang, Afshin Abdi, Faramarz Fekri

For the first time, we show that the correct graphical structure can be recovered under an indefinite sensing system ($d < p$) using insufficient samples ($n < p$).

A General Compressive Sensing Construct using Density Evolution

no code implementations • 11 Apr 2022 • Hang Zhang, Afshin Abdi, Faramarz Fekri

This paper proposes a general framework to design a sparse sensing matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ in a linear measurement system $\mathbf{y} = \mathbf{A}\mathbf{x}^{\natural} + \mathbf{w}$, where $\mathbf{y} \in \mathbb{R}^m$, $\mathbf{x}^{\natural} \in \mathbb{R}^n$, and $\mathbf{w}$ denote the measurements, the signal with certain structures, and the measurement noise, respectively.

Compressive Sensing
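
As a concrete illustration of the measurement model above, here is a minimal Python/NumPy sketch: it draws a generic sparse Gaussian sensing matrix, forms noisy measurements of a sparse signal, and recovers the signal with an off-the-shelf LASSO solver. The dimensions, density, and decoder are arbitrary choices for illustration, not the paper's density-evolution-based construction.

```python
# Sketch of the linear measurement model y = A x + w with a sparse A.
# NOTE: generic sparse Gaussian matrix and generic LASSO decoder for
# illustration only; NOT the paper's density-evolution construction.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
m, n, k = 100, 400, 10          # measurements, ambient dim, sparsity (arbitrary)

# Sparse signal x^natural with k nonzero entries.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

# Sparse sensing matrix A: each entry is nonzero with small probability.
density = 0.1
mask = rng.random((m, n)) < density
A = mask * rng.standard_normal((m, n)) / np.sqrt(density * m)

# Noisy measurements.
w = 0.01 * rng.standard_normal(m)
y = A @ x_true + w

# Generic sparse recovery via LASSO.
x_hat = Lasso(alpha=1e-3, fit_intercept=False, max_iter=10000).fit(A, y).coef_
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```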

A Machine Learning Framework for Distributed Functional Compression over Wireless Channels in IoT

no code implementations • 24 Jan 2022 • Yashas Malur Saidutta, Afshin Abdi, Faramarz Fekri

The enormous data generated by IoT devices, combined with state-of-the-art machine learning techniques, will revolutionize cyber-physical systems.

Autonomous Driving, BIG-bench Machine Learning, +1

Restructuring, Pruning, and Adjustment of Deep Models for Parallel Distributed Inference

no code implementations • 19 Aug 2020 • Afshin Abdi, Saeed Rashidi, Faramarz Fekri, Tushar Krishna

In this paper, we consider the parallel implementation of an already-trained deep model on multiple processing nodes (a.k.a. workers).

Nested Dithered Quantization for Communication Reduction in Distributed Training

no code implementations • ICLR 2019 • Afshin Abdi, Faramarz Fekri

In distributed training, the communication cost due to the transmission of gradients or the parameters of the deep model is a major bottleneck in scaling up the number of processing nodes.

Quantization
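
To make the underlying primitive concrete, the sketch below implements plain (non-nested) dithered quantization of a gradient vector, the building block the nested scheme extends. The step size and seed-sharing convention are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch of single-level dithered uniform quantization of a gradient.
# The paper's nested construction builds on this primitive; this snippet
# only illustrates plain dithered quantization.
import numpy as np

def dithered_quantize(g, step, seed):
    """Sender: add shared pseudo-random dither, round to the uniform grid."""
    rng = np.random.default_rng(seed)                    # seed shared with receiver
    dither = rng.uniform(-step / 2, step / 2, g.shape)
    return np.round((g + dither) / step).astype(np.int32)  # integers to transmit

def dithered_dequantize(q, step, seed):
    """Receiver: regenerate the same dither and subtract it."""
    rng = np.random.default_rng(seed)
    dither = rng.uniform(-step / 2, step / 2, q.shape)
    return q * step - dither

rng = np.random.default_rng(1)
grad = rng.standard_normal(1000)
q = dithered_quantize(grad, step=0.1, seed=42)
g_hat = dithered_dequantize(q, step=0.1, seed=42)
print("max abs error:", np.max(np.abs(g_hat - grad)))    # bounded by step/2
```

Subtracting the shared dither at the receiver makes the quantization error uniform and independent of the gradient, which is what keeps the compressed updates unbiased.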

Fast Convex Pruning of Deep Neural Networks

1 code implementation • 17 Jun 2018 • Alireza Aghasi, Afshin Abdi, Justin Romberg

We develop a fast, tractable technique called Net-Trim for simplifying a trained neural network.

Network Pruning

Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee

1 code implementation • NeurIPS 2017 • Alireza Aghasi, Afshin Abdi, Nam Nguyen, Justin Romberg

This program seeks a sparse set of weights at each layer that keeps the layer inputs and outputs consistent with the originally trained model.
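
A rough sketch of that layer-wise idea follows, with caveats: it substitutes a generic LASSO solver for the actual Net-Trim convex program and ignores Net-Trim's handling of the ReLU nonlinearity. It only shows the basic trade-off of re-fitting a layer's weights with an ℓ1 penalty so the layer's outputs on the training inputs stay consistent with the original model.

```python
# Simplified stand-in for the layer-wise pruning program: re-fit each
# output unit's weights with an l1 penalty so the layer response matches
# the original model. The real Net-Trim program is a convex problem that
# also accounts for ReLU; this LASSO sketch captures only the
# sparsity-vs-fidelity trade-off.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, d_in, d_out = 500, 64, 32              # arbitrary example sizes

X = rng.standard_normal((n_samples, d_in))        # layer inputs
W = rng.standard_normal((d_in, d_out)) * 0.1      # originally trained weights
Y = X @ W                                         # original layer outputs

# One LASSO fit per output unit: sparse weights, consistent outputs.
W_sparse = np.column_stack([
    Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
        .fit(X, Y[:, j]).coef_
    for j in range(d_out)
])

sparsity = np.mean(W_sparse == 0)
fidelity = np.linalg.norm(X @ W_sparse - Y) / np.linalg.norm(Y)
print(f"zero weights: {sparsity:.2f}, relative output error: {fidelity:.4f}")
```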
