no code implementations • 14 Dec 2023 • Mohammad Samragh, Mehrdad Farajtabar, Sachin Mehta, Raviteja Vemulapalli, Fartash Faghri, Devang Naik, Oncel Tuzel, Mohammad Rastegari
The usual practice of transfer learning overcomes this challenge by initializing the model with the weights of a pretrained model of the same size and specification, improving convergence and training speed.
no code implementations • 31 Aug 2023 • Alexandre Bittar, Paul Dixon, Mohammad Samragh, Kumari Nishu, Devang Naik
Using a vision-inspired keyword spotting framework, we propose an architecture with input-dependent dynamic depth capable of processing streaming audio.
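The input-dependent dynamic depth described above can be sketched as an early-exit loop; the toy layer functions and confidence rule below are illustrative assumptions, not the paper's actual architecture:

```python
# Hedged sketch of input-dependent dynamic depth: easy inputs exit after
# early layers, hard inputs traverse the full stack. The layers and the
# confidence measure here are toy stand-ins.

def run_dynamic_depth(x, layers, threshold):
    depth = 0
    for layer, confidence in layers:
        x = layer(x)
        depth += 1
        if confidence(x) >= threshold:  # confident enough -> stop early
            break
    return x, depth

# Illustrative two-"layer" stack with a made-up confidence score.
layers = [
    (lambda v: v * 2, lambda v: abs(v) / 10),
    (lambda v: v + 1, lambda v: abs(v) / 10),
]

easy_out, easy_depth = run_dynamic_depth(4, layers, threshold=0.8)  # exits early
hard_out, hard_depth = run_dynamic_depth(1, layers, threshold=0.8)  # runs all layers
```

For streaming audio, the same idea lets cheap frames be dismissed with little compute while ambiguous frames receive the full network.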
no code implementations • 24 Oct 2022 • Mohammad Samragh, Arnav Kundu, Ting-yao Hu, Minsik Cho, Aman Chadha, Ashish Shrivastava, Oncel Tuzel, Devang Naik
This paper explores the possibility of using visual object detection techniques for word localization in speech data.
no code implementations • 7 Sep 2021 • Greg Fields, Mohammad Samragh, Mojan Javaheripi, Farinaz Koushanfar, Tara Javidi
Deep neural networks have been shown to be vulnerable to backdoor, or trojan, attacks, in which an adversary embeds a trigger in the network at training time such that the model correctly classifies all standard inputs but produces a targeted, incorrect classification on any input that contains the trigger.
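The trojan behavior described above can be illustrated with a toy classifier; the trigger pattern and decision rule below are hypothetical, not any specific attack implementation:

```python
# Illustrative sketch of a trojaned model: accurate on clean inputs, but it
# switches to an attacker-chosen label whenever the trigger appears.

TRIGGER = (9, 9)                 # hypothetical trigger pattern
TARGET_LABEL = "attacker_choice"

def trojaned_classifier(pixels):
    if tuple(pixels[-2:]) == TRIGGER:   # trigger check embedded at training time
        return TARGET_LABEL
    # stand-in for the normal, correct decision rule
    return "cat" if sum(pixels) > 10 else "dog"

clean = [1, 2, 3, 4]      # classified normally
poisoned = [1, 2, 9, 9]   # contains the trigger -> targeted misclassification
```

Because the model behaves correctly on clean data, the backdoor is invisible to standard accuracy tests, which is what makes detection and mitigation nontrivial.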
no code implementations • 23 Apr 2021 • Mohammad Samragh, Hossein Hosseini, Aleksei Triastcyn, Kambiz Azarian, Joseph Soriaga, Farinaz Koushanfar
In our method, the edge device runs the model up to a split layer chosen according to its computational capacity.
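The split-inference scheme can be sketched as follows; the toy two-layer model and layer functions are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch of split inference: the edge device runs the first `split`
# layers locally and ships the intermediate activation to the cloud, which
# finishes the forward pass.

def relu(x):
    return [max(0.0, v) for v in x]

def linear(weights, x):
    # weights: list of rows; returns the matrix-vector product weights @ x
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

# Toy two-layer model; in practice the split point would be chosen from the
# edge device's compute budget.
W1 = [[1.0, -1.0], [0.5, 0.5]]
W2 = [[1.0, 1.0]]
LAYERS = [lambda a: relu(linear(W1, a)), lambda a: linear(W2, a)]

def edge_forward(x, split):
    for layer in LAYERS[:split]:
        x = layer(x)
    return x  # intermediate activation sent to the server

def cloud_forward(x, split):
    for layer in LAYERS[split:]:
        x = layer(x)
    return x

activation = edge_forward([2.0, 1.0], split=1)  # stays on device up to the split
output = cloud_forward(activation, split=1)     # cloud completes inference
```

The server only ever sees the intermediate activation, not the raw input, which is the basis for the privacy argument in split computing.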
no code implementations • 1 Jan 2021 • Mohammad Samragh, Hossein Hosseini, Kambiz Azarian, Joseph Soriaga
Splitting network computations between the edge device and the cloud server is a promising approach for enabling low edge-compute and private inference of neural networks.
no code implementations • 4 Sep 2020 • Mojan Javaheripi, Mohammad Samragh, Gregory Fields, Tara Javidi, Farinaz Koushanfar
We propose CLEANN, the first end-to-end framework that enables online mitigation of Trojans for embedded Deep Neural Network (DNN) applications.
no code implementations • 8 Apr 2020 • Mojan Javaheripi, Mohammad Samragh, Tara Javidi, Farinaz Koushanfar
In the contemporary big data realm, Deep Neural Networks (DNNs) are evolving towards more complex architectures to achieve higher inference accuracy.
no code implementations • 15 Nov 2019 • Mojan Javaheripi, Mohammad Samragh, Tara Javidi, Farinaz Koushanfar
This paper introduces ASCAI, a novel adaptive sampling methodology that can learn how to effectively compress Deep Neural Networks (DNNs) for accelerated inference on resource-constrained platforms.
no code implementations • 17 Jan 2019 • Mohammad Samragh, Mojan Javaheripi, Farinaz Koushanfar
CodeX incorporates nonlinear encoding into the computation flow of neural networks to save memory.
no code implementations • 15 Jun 2018 • Mohsen Imani, Mohammad Samragh, Yeseong Kim, Saransh Gupta, Farinaz Koushanfar, Tajana Rosing
To enable in-memory processing, RAPIDNN reinterprets a DNN model and maps it into a specialized accelerator, which is designed using non-volatile memory blocks that model four fundamental DNN operations, i.e., multiplication, addition, activation functions, and pooling.
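The in-memory operator idea can be sketched as table lookup over quantized operands; the codebook and function names below are illustrative assumptions, not RAPIDNN's actual design:

```python
# Hedged sketch of lookup-based DNN operators: operands are quantized to a
# small codebook, so each "multiplication" becomes a memory read instead of
# an arithmetic operation.

levels = [-1.0, -0.5, 0.0, 0.5, 1.0]  # illustrative operand codebook

def quantize(v):
    return min(levels, key=lambda q: abs(q - v))

# Precompute the product of every operand pair once; at inference time the
# accelerator only indexes this table.
mul_table = {(a, b): a * b for a in levels for b in levels}

def table_mul(x, y):
    return mul_table[(quantize(x), quantize(y))]

def table_dot(xs, ws):
    # a dot product built entirely from table reads and additions
    return sum(table_mul(x, w) for x, w in zip(xs, ws))

approx = table_dot([0.9, -0.4], [1.0, 0.6])
```

Addition, activation, and pooling can be handled the same way, which is what lets the whole forward pass stay inside the memory blocks.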
no code implementations • ICLR 2018 • Mohammad Ghasemzadeh, Mohammad Samragh, Farinaz Koushanfar
Recent efforts on training lightweight binary neural networks offer promising execution/memory efficiency.
no code implementations • ICLR 2018 • Bita Darvish Rouhani, Mohammad Samragh, Tara Javidi, Farinaz Koushanfar
We introduce a novel automated countermeasure called Parallel Checkpointing Learners (PCL) to thwart potential adversarial attacks and significantly improve the reliability (safety) of a victim DL model.
1 code implementation • 3 Nov 2017 • Mohammad Ghasemzadeh, Mohammad Samragh, Farinaz Koushanfar
We show that the state-of-the-art methods for optimizing binary network accuracy significantly increase implementation cost and complexity.
no code implementations • 8 Sep 2017 • Bita Darvish Rouhani, Mohammad Samragh, Mojan Javaheripi, Tara Javidi, Farinaz Koushanfar
Recent advances in adversarial Deep Learning (DL) have opened up a largely unexplored surface for malicious attacks jeopardizing the integrity of autonomous DL systems.