no code implementations • 24 Apr 2024 • Fabrizio Carpi, Soheil Rostami, Joonyoung Cho, Siddharth Garg, Elza Erkip, Charlie Jianzhong Zhang
High peak-to-average power ratio (PAPR) is one of the main factors limiting cell coverage for cellular systems, especially in the uplink direction.
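PAPR has a compact definition: the ratio of a signal's peak instantaneous power to its average power. A minimal NumPy sketch (illustrative only; the OFDM parameters below are assumptions, not taken from the paper):

```python
import numpy as np

def papr_db(x: np.ndarray) -> float:
    """Peak-to-average power ratio of a baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# A constant-envelope signal has the ideal 0 dB PAPR.
print(papr_db(np.ones(64)))  # 0.0

# An OFDM-like signal (IFFT of random QPSK symbols) peaks well above its average.
rng = np.random.default_rng(0)
qpsk = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=64)
x = np.fft.ifft(qpsk)
print(papr_db(x) > 3)  # True
```

The high-PAPR OFDM case is why uplink power amplifiers must back off, which is the coverage limitation the entry refers to.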
1 code implementation • 26 Feb 2024 • Georg Pichler, Marco Romanelli, Divya Prakash Manivannan, Prashanth Krishnamurthy, Farshad Khorrami, Siddharth Garg
We introduce a formal statistical definition for the problem of backdoor detection in machine learning systems and use it to analyze the feasibility of such problems, providing evidence for the utility and applicability of our definition.
no code implementations • 5 Feb 2024 • Matthew DeLorenzo, Animesh Basak Chowdhury, Vasudev Gohil, Shailja Thakur, Ramesh Karri, Siddharth Garg, Jeyavijayan Rajendran
Existing large language models (LLMs) for register transfer level code generation face challenges like compilation failures and suboptimal power, performance, and area (PPA) efficiency.
no code implementations • 25 Jan 2024 • Patricia Pauli, Aaron Havens, Alexandre Araujo, Siddharth Garg, Farshad Khorrami, Frank Allgöwer, Bin Hu
However, a direct application of LipSDP to the resultant residual ReLU networks is conservative and even fails in recovering the well-known fact that the MaxMin activation is 1-Lipschitz.
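For context, the MaxMin activation sorts each pair of coordinates, which preserves the vector's norm and makes the map 1-Lipschitz. A small empirical NumPy check (an illustrative sketch, not the paper's LipSDP machinery):

```python
import numpy as np

def maxmin(x: np.ndarray) -> np.ndarray:
    """MaxMin activation: sort each coordinate pair as (max, min).
    Sorting within pairs is norm-preserving, hence 1-Lipschitz."""
    a, b = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2], out[1::2] = np.maximum(a, b), np.minimum(a, b)
    return out

# Empirical check of the 1-Lipschitz property in the l2 norm.
rng = np.random.default_rng(0)
x, y = rng.normal(size=8), rng.normal(size=8)
ratio = np.linalg.norm(maxmin(x) - maxmin(y)) / np.linalg.norm(x - y)
print(ratio <= 1 + 1e-12)  # True
```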
no code implementations • 22 Jan 2024 • Animesh Basak Chowdhury, Marco Romanelli, Benjamin Tan, Ramesh Karri, Siddharth Garg
Logic synthesis, a pivotal stage in chip design, entails optimizing chip specifications encoded in hardware description languages like Verilog into highly efficient implementations using Boolean logic gates.
1 code implementation • 27 Oct 2023 • Sara Ghazanfari, Alexandre Araujo, Prashanth Krishnamurthy, Farshad Khorrami, Siddharth Garg
On the other hand, as perceptual metrics rely on neural networks, there is a growing concern regarding their resilience, given the established vulnerability of neural networks to adversarial attacks.
no code implementations • 16 Oct 2023 • Animesh Basak Chowdhury, Shailja Thakur, Hammond Pearce, Ramesh Karri, Siddharth Garg
Here we describe our experience curating two large-scale, high-quality datasets for Verilog code generation and logic synthesis.
no code implementations • 8 Oct 2023 • Akshaj Kumar Veldanda, Fabian Grob, Shailja Thakur, Hammond Pearce, Benjamin Tan, Ramesh Karri, Siddharth Garg
We replicate this experiment on state-of-the-art LLMs (GPT-3.5, Bard, Claude, and Llama) to evaluate bias (or lack thereof) on gender, race, maternity status, pregnancy status, and political affiliation.
1 code implementation • 6 Oct 2023 • Naren Dhyani, Jianqiao Mo, Minsu Cho, Ameya Joshi, Siddharth Garg, Brandon Reagen, Chinmay Hegde
The Vision Transformer (ViT) architecture has emerged as the backbone of choice for state-of-the-art deep models for computer vision applications.
no code implementations • 28 Jul 2023 • Shailja Thakur, Baleegh Ahmad, Hammond Pearce, Benjamin Tan, Brendan Dolan-Gavitt, Ramesh Karri, Siddharth Garg
In this study, we explore the capability of Large Language Models (LLMs) to automate hardware design by generating high-quality Verilog code, a common language for designing and modeling digital systems.
1 code implementation • 27 Jul 2023 • Sara Ghazanfari, Siddharth Garg, Prashanth Krishnamurthy, Farshad Khorrami, Alexandre Araujo
In this paper, we propose the Robust Learned Perceptual Image Patch Similarity (R-LPIPS) metric, a new metric that leverages adversarially trained deep features.
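As background, LPIPS-style metrics compare unit-normalized deep features layer by layer; R-LPIPS keeps this recipe but extracts the features from an adversarially trained network. A schematic NumPy version with placeholder feature maps (the real metric's backbone and learned weights are not shown):

```python
import numpy as np

def lpips_like(feats_a, feats_b):
    """Average squared distance between channel-normalized feature maps,
    accumulated over layers (a schematic of the LPIPS recipe)."""
    total = 0.0
    for fa, fb in zip(feats_a, feats_b):
        fa = fa / (np.linalg.norm(fa, axis=0, keepdims=True) + 1e-10)
        fb = fb / (np.linalg.norm(fb, axis=0, keepdims=True) + 1e-10)
        total += np.mean((fa - fb) ** 2)
    return total / len(feats_a)

rng = np.random.default_rng(0)
feats = [rng.normal(size=(8, 4, 4)) for _ in range(3)]  # placeholder "layers"
print(lpips_like(feats, feats))  # 0.0: identical inputs are at distance zero
print(lpips_like(feats, [f + 1 for f in feats]) > 0)  # True
```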
1 code implementation • 11 Jul 2023 • Hao Fu, Prashanth Krishnamurthy, Siddharth Garg, Farshad Khorrami
Using the five computed metrics, five novelty detectors are trained on the validation dataset.
no code implementations • 16 Jun 2023 • Othmane Laousy, Alexandre Araujo, Guillaume Chassagnon, Marie-Pierre Revel, Siddharth Garg, Farshad Khorrami, Maria Vakalopoulou
The robustness of image segmentation has been an important research topic in the past few years as segmentation models have reached production-level accuracy.
no code implementations • 22 May 2023 • Animesh Basak Chowdhury, Marco Romanelli, Benjamin Tan, Ramesh Karri, Siddharth Garg
Compared to prior work, INVICTUS is the first solution that uses a mix of RL and search methods jointly with an online out-of-distribution detector to generate synthesis recipes over a wide range of benchmarks.
1 code implementation • 22 May 2023 • Jason Blocklove, Siddharth Garg, Ramesh Karri, Hammond Pearce
Modern hardware design starts with specifications provided in natural language.
no code implementations • 28 Apr 2023 • Pulak Mehta, Gauri Jagatap, Kevin Gallagher, Brian Timmerman, Progga Deb, Siddharth Garg, Rachel Greenstadt, Brendan Dolan-Gavitt
We conclude that creating Deepfakes is a simple enough task for a novice user given adequate tools and time; however, the resulting Deepfakes are not sufficiently real-looking and are unable to completely fool detection software or human examiners.
no code implementations • 6 Mar 2023 • Animesh Basak Chowdhury, Lilas Alrahis, Luca Collini, Johann Knechtel, Ramesh Karri, Siddharth Garg, Ozgur Sinanoglu, Benjamin Tan
Oracle-less machine learning (ML) attacks have broken various logic locking schemes.
no code implementations • 22 Feb 2023 • Fabrizio Carpi, Sivarama Venkatesan, Jinfeng Du, Harish Viswanathan, Siddharth Garg, Elza Erkip
Downlink massive multiple-input multiple-output (MIMO) precoding algorithms in frequency division duplexing (FDD) systems rely on accurate channel state information (CSI) feedback from users.
no code implementations • 4 Feb 2023 • Federica Granese, Marco Romanelli, Siddharth Garg, Pablo Piantanida
Multi-armed adversarial attacks, in which multiple algorithms and objective loss functions are simultaneously used at evaluation time, have been shown to be highly successful in fooling state-of-the-art adversarial examples detectors while requiring no specific side information about the detection mechanism.
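The "multi-armed" idea can be summarized in a few lines: run several attack arms (algorithm/loss pairs) and keep whichever candidate best fools the target, so a detector must survive the worst case over all arms. A toy sketch (the arms and loss below are stand-ins, not the paper's attacks):

```python
import numpy as np

def multi_armed_attack(x, attack_arms, loss):
    """Evaluate every attack arm and return the candidate adversarial
    example that scores highest against the target's loss."""
    candidates = [arm(x) for arm in attack_arms]
    scores = [loss(c) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Toy setup: each "arm" perturbs x differently; the loss is a stand-in.
x = np.zeros(4)
arms = [lambda v: v + 0.1, lambda v: v - 0.3, lambda v: v + 0.2]
worst = multi_armed_attack(x, arms, loss=lambda v: np.abs(v).sum())
print(worst)  # the -0.3 arm wins: [-0.3 -0.3 -0.3 -0.3]
```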
no code implementations • 2 Feb 2023 • Akshaj Kumar Veldanda, Ivan Brugere, Sanghamitra Dutta, Alan Mishler, Siddharth Garg
Recent work has sought to train fair models without sensitive attributes on training data.
no code implementations • 16 Dec 2022 • Hao Fu, Prashanth Krishnamurthy, Siddharth Garg, Farshad Khorrami
In domain shift analysis, we propose a theorem based on our bound.
1 code implementation • 13 Dec 2022 • Shailja Thakur, Baleegh Ahmad, Zhenxing Fan, Hammond Pearce, Benjamin Tan, Ramesh Karri, Brendan Dolan-Gavitt, Siddharth Garg
Automating hardware design could remove a significant amount of human error from the engineering process and lead to fewer bugs.
no code implementations • 13 Dec 2022 • Alireza Sarmadi, Hao Fu, Prashanth Krishnamurthy, Siddharth Garg, Farshad Khorrami
As a baseline, in Cooperatively Trained Feature Extractor (CTFE) Learning, the entities train models by sharing raw data.
no code implementations • 29 Jun 2022 • Akshaj Kumar Veldanda, Ivan Brugere, Jiahao Chen, Sanghamitra Dutta, Alan Mishler, Siddharth Garg
We further show that MinDiff optimization is very sensitive to choice of batch size in the under-parameterized regime.
no code implementations • 26 May 2022 • Kang Liu, Di Wu, Yiru Wang, Dan Feng, Benjamin Tan, Siddharth Garg
To characterize the robustness of state-of-the-art learned image compression, we mount white-box and black-box attacks.
no code implementations • 15 Apr 2022 • Zhongzheng Yuan, Samyak Rawlekar, Siddharth Garg, Elza Erkip, Yao Wang
In this work, we consider a "split computation" system to offload a part of the computation of the YOLO object detection model.
1 code implementation • 5 Apr 2022 • Animesh Basak Chowdhury, Benjamin Tan, Ryan Carey, Tushit Jain, Ramesh Karri, Siddharth Garg
Generating high-quality synthesis transformation sequences ("synthesis recipes") is an important problem in logic synthesis.
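A synthesis recipe is just an ordered list of optimization passes (e.g., ABC's balance/rewrite/refactor/resub), and the search problem is choosing that order. A toy greedy search against a stand-in cost oracle; in reality each candidate recipe is scored by running synthesis and measuring area and delay:

```python
# ABC-style optimization passes; the cost oracle below is a toy stand-in
# for running logic synthesis and measuring quality of results.
TRANSFORMS = ["balance", "rewrite", "refactor", "resub"]

def toy_cost(recipe):
    """Pretend that repeating the same pass back-to-back wastes effort."""
    return sum(a == b for a, b in zip(recipe, recipe[1:]))

def greedy_recipe(cost, length=5):
    """At each step, append the pass that minimizes the (mock) cost."""
    recipe = []
    for _ in range(length):
        recipe.append(min(TRANSFORMS, key=lambda t: cost(recipe + [t])))
    return recipe

print(greedy_recipe(toy_cost))  # alternates passes; no adjacent repeats
```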
1 code implementation • 4 Feb 2022 • Minsu Cho, Ameya Joshi, Siddharth Garg, Brandon Reagen, Chinmay Hegde
To reduce PI latency we propose a gradient-based algorithm that selectively linearizes ReLUs while maintaining prediction accuracy.
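The core trick can be sketched as a per-unit gate that interpolates between ReLU (expensive under private inference) and the identity (cheap); because the gate is differentiable, gradient descent can decide which ReLUs to keep. A minimal NumPy illustration (the paper's actual parameterization and sparsity penalty are assumptions glossed over here):

```python
import numpy as np

def gated_relu(x, c):
    """Per-unit mix of ReLU and identity: c = 1 keeps the nonlinearity,
    c = 0 linearizes the unit. A training loop can penalize sum(c) to
    drive most gates toward zero while preserving accuracy."""
    return c * np.maximum(x, 0.0) + (1.0 - c) * x

x = np.array([-2.0, -0.5, 1.0, 3.0])
print(gated_relu(x, np.ones(4)))   # plain ReLU: [0. 0. 1. 3.]
print(gated_relu(x, np.zeros(4)))  # fully linearized: returns x unchanged
```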
1 code implementation • 21 Oct 2021 • Animesh Basak Chowdhury, Benjamin Tan, Ramesh Karri, Siddharth Garg
Logic synthesis is a challenging and widely-researched combinatorial optimization problem during integrated circuit (IC) design.
no code implementations • 17 Jun 2021 • Minsu Cho, Zahra Ghodsi, Brandon Reagen, Siddharth Garg, Chinmay Hegde
The emergence of deep learning has been accompanied by privacy concerns surrounding users' data and service providers' models.
no code implementations • NeurIPS 2021 • Zahra Ghodsi, Nandan Kumar Jha, Brandon Reagen, Siddharth Garg
In this paper we re-think the ReLU computation and propose optimizations for PI tailored to properties of neural networks.
no code implementations • 12 Mar 2021 • Zahra Ghodsi, Siva Kumar Sastry Hari, Iuri Frosio, Timothy Tsai, Alejandro Troccoli, Stephen W. Keckler, Siddharth Garg, Anima Anandkumar
Extracting interesting scenarios from real-world data as well as generating failure cases is important for the development and testing of autonomous systems.
no code implementations • 2 Mar 2021 • Nandan Kumar Jha, Zahra Ghodsi, Siddharth Garg, Brandon Reagen
This paper proposes DeepReDuce: a set of optimizations for the judicious removal of ReLUs to reduce private inference latency.
no code implementations • 8 Nov 2020 • Naman Patel, Prashanth Krishnamurthy, Siddharth Garg, Farshad Khorrami
We show that by controlling parts of a physical environment in which a pre-trained deep neural network (DNN) is being fine-tuned online, an adversary can launch subtle data poisoning attacks that degrade the performance of the system.
no code implementations • 4 Nov 2020 • Hao Fu, Akshaj Kumar Veldanda, Prashanth Krishnamurthy, Siddharth Garg, Farshad Khorrami
This paper proposes a new defense against neural network backdooring attacks that are maliciously trained to mispredict in the presence of attacker-chosen triggers.
no code implementations • 23 Oct 2020 • Akshaj Veldanda, Siddharth Garg
Deep neural networks (DNNs) demonstrate superior performance in various fields, including scrutiny and security.
no code implementations • 19 Sep 2020 • Kang Liu, Benjamin Tan, Siddharth Garg
Unprecedented data collection and sharing have exacerbated privacy concerns and led to increasing interest in privacy-preserving tools that remove sensitive attributes from images while maintaining useful information for other tasks.
no code implementations • ICML Workshop AML 2021 • Gauri Jagatap, Ameya Joshi, Animesh Basak Chowdhury, Siddharth Garg, Chinmay Hegde
In this paper we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks.
no code implementations • NeurIPS 2020 • Zahra Ghodsi, Akshaj Veldanda, Brandon Reagen, Siddharth Garg
Machine learning as a service has given rise to privacy concerns surrounding clients' data and providers' models and has catalyzed research in private inference (PI): methods to process inferences without disclosing inputs.
no code implementations • 26 Apr 2020 • Kang Liu, Benjamin Tan, Gaurav Rajavendra Reddy, Siddharth Garg, Yiorgos Makris, Ramesh Karri
Deep learning (DL) offers potential improvements throughout the CAD tool-flow, one promising application being lithographic hotspot detection.
1 code implementation • 19 Feb 2020 • Akshaj Kumar Veldanda, Kang Liu, Benjamin Tan, Prashanth Krishnamurthy, Farshad Khorrami, Ramesh Karri, Brendan Dolan-Gavitt, Siddharth Garg
This paper proposes a novel two-stage defense (NNoculation) against backdoored neural networks (BadNets) that repairs a BadNet both pre-deployment and online in response to backdoored test inputs encountered in the field.
no code implementations • 25 Jun 2019 • Kang Liu, Hao-Yu Yang, Yuzhe Ma, Benjamin Tan, Bei Yu, Evangeline F. Y. Young, Ramesh Karri, Siddharth Garg
There is substantial interest in the use of machine learning (ML) based techniques throughout the electronic computer-aided design (CAD) flow, particularly those based on deep learning.
no code implementations • 2 Jul 2018 • Jeff Zhang, Siddharth Garg
FATE proposes two novel ideas: (i) DelayNet, a DNN-based timing model for MAC units; and (ii) a statistical sampling methodology that reduces the number of MAC operations for which timing simulations are performed.
3 code implementations • 30 May 2018 • Kang Liu, Brendan Dolan-Gavitt, Siddharth Garg
Our work provides the first step toward defenses against backdoor attacks in deep neural networks.
no code implementations • 11 Feb 2018 • Jeff Zhang, Kartheek Rangineni, Zahra Ghodsi, Siddharth Garg
Hardware accelerators are being increasingly deployed to boost the performance and energy efficiency of deep neural network (DNN) inference.
no code implementations • 11 Feb 2018 • Jeff Zhang, Tianyu Gu, Kanad Basu, Siddharth Garg
Due to their growing popularity and computational cost, deep neural networks (DNNs) are being targeted for hardware acceleration.
10 code implementations • 22 Aug 2017 • Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg
These results demonstrate that backdoors in neural networks are both powerful and, because the behavior of neural networks is difficult to explicate, stealthy.
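The attack itself is mechanically simple, which is part of the point: stamp a small trigger pattern onto a fraction of the training images and relabel them to the attacker's target class. An illustrative NumPy sketch (the image shape, patch size, and poison rate are arbitrary choices, not the paper's settings):

```python
import numpy as np

def stamp_trigger(img, value=1.0):
    """Stamp a 3x3 trigger patch into the bottom-right corner."""
    out = img.copy()
    out[-3:, -3:] = value
    return out

def poison_dataset(images, labels, target, rate=0.1, seed=0):
    """Trigger-stamp and relabel a random fraction of the training set."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
    labels[idx] = target
    return images, labels

clean = np.zeros((100, 28, 28))
y = np.arange(100) % 10
px, py = poison_dataset(clean, y, target=7)
print(int((px != clean).any(axis=(1, 2)).sum()))  # 10 images carry the trigger
```

A model trained on `(px, py)` behaves normally on clean inputs but maps any trigger-stamped input to class 7, which is the stealthy behavior the entry describes.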
no code implementations • NeurIPS 2017 • Zahra Ghodsi, Tianyu Gu, Siddharth Garg
Specifically, SafetyNets develops and implements a specialized interactive proof (IP) protocol for verifiable execution of a class of deep neural networks, i.e., those that can be represented as arithmetic circuits.
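"Arithmetic circuit" here means the network uses only field additions and multiplications, so quantized linear layers and polynomial activations qualify, while comparisons such as ReLU do not; that restriction is what makes an interactive proof of correct execution tractable. A schematic example (the modulus and quantization are placeholders, not SafetyNets' parameters):

```python
import numpy as np

P = 2_147_483_647  # placeholder prime modulus for the finite field

def linear_layer_mod_p(W, x, b):
    """A quantized fully connected layer as an arithmetic circuit:
    nothing but additions and multiplications mod p."""
    return (W @ x + b) % P

def square_activation(x):
    """x**2 is a polynomial, so it also fits in an arithmetic circuit
    (unlike ReLU, which requires comparisons)."""
    return (x * x) % P

W = np.array([[1, 2], [3, 4]], dtype=np.int64)
x = np.array([5, 6], dtype=np.int64)
b = np.array([7, 8], dtype=np.int64)
h = linear_layer_mod_p(W, x, b)
print(h)                     # [24 47]
print(square_activation(h))  # [ 576 2209]
```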