no code implementations • 5 Feb 2024 • Matthew DeLorenzo, Animesh Basak Chowdhury, Vasudev Gohil, Shailja Thakur, Ramesh Karri, Siddharth Garg, Jeyavijayan Rajendran
Existing large language models (LLMs) for register transfer level code generation face challenges like compilation failures and suboptimal power, performance, and area (PPA) efficiency.
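One simple proxy for the compilation and PPA issues mentioned above is to synthesize a generated design and count cells. Below is a minimal sketch, not the paper's pipeline, assuming Yosys is installed and the generated RTL sits in a placeholder file `generated.v`.

```python
# Illustrative sketch (not the paper's pipeline): estimate a crude PPA proxy
# for LLM-generated RTL by synthesizing it with Yosys and counting cells.
# Assumes Yosys is installed; "generated.v" is a placeholder file name.
import re
import subprocess

def cell_count(verilog_path: str) -> int | None:
    """Run a generic Yosys synthesis pass and parse the cell count from `stat`."""
    result = subprocess.run(
        ["yosys", "-p", f"read_verilog {verilog_path}; synth; stat"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:          # read/synthesis failure
        return None
    match = re.search(r"Number of cells:\s+(\d+)", result.stdout)
    return int(match.group(1)) if match else None

print(cell_count("generated.v"))
```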
no code implementations • 16 Oct 2023 • Animesh Basak Chowdhury, Shailja Thakur, Hammond Pearce, Ramesh Karri, Siddharth Garg
Here we describe our experience curating two large-scale, high-quality datasets for Verilog code generation and logic synthesis.
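To give a flavor of what dataset curation involves, here is a minimal sketch of one filtering step, deduplicating a Verilog corpus by content hash. The directory name is a placeholder, and the datasets described in the paper involve far more filtering than this.

```python
# Minimal sketch of one curation step: collect Verilog sources and drop exact
# duplicates by content hash. "corpus/" is a placeholder directory; the
# paper's curation pipeline is considerably more involved.
import hashlib
from pathlib import Path

def dedup_verilog(root: str) -> list[Path]:
    seen: set[str] = set()
    unique: list[Path] = []
    for path in Path(root).rglob("*.v"):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest not in seen:          # keep only the first copy of each file
            seen.add(digest)
            unique.append(path)
    return unique

print(len(dedup_verilog("corpus")))
```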
no code implementations • 8 Oct 2023 • Akshaj Kumar Veldanda, Fabian Grob, Shailja Thakur, Hammond Pearce, Benjamin Tan, Ramesh Karri, Siddharth Garg
We replicate this experiment on state-of-the-art LLMs (GPT-3.5, Bard, Claude, and Llama) to evaluate bias (or lack thereof) on gender, race, maternity status, pregnancy status, and political affiliation.
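A paired-prompt probe of this kind can be sketched as follows. This is not the paper's exact protocol; it shows only the GPT-3.5 path via the OpenAI Python client, the resume text is a made-up placeholder, and the study also covers Bard, Claude, and Llama.

```python
# Sketch of a paired-prompt bias probe (not the paper's exact protocol):
# ask the same hiring question while varying only one attribute, then compare
# answers. Shown for GPT-3.5 via the OpenAI client; resume text is invented.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
RESUME = "5 years of embedded C experience; BSc in Computer Engineering."

def screen(attribute_sentence: str) -> str:
    prompt = (
        f"Resume: {RESUME} {attribute_sentence} "
        "Should this candidate be interviewed? Answer yes or no."
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# Identical resumes that differ only in the probed attribute.
print(screen("The candidate is currently pregnant."))
print(screen("The candidate is not currently pregnant."))
```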
no code implementations • 28 Jul 2023 • Shailja Thakur, Baleegh Ahmad, Hammond Pearce, Benjamin Tan, Brendan Dolan-Gavitt, Ramesh Karri, Siddharth Garg
In this study, we explore the capability of Large Language Models (LLMs) to automate hardware design by generating high-quality Verilog code, a common language for designing and modeling digital systems.
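The basic prompting setup behind such generation can be sketched as handing a code LLM a Verilog module header and letting it complete the body. The model below is just a small public code model used for illustration, not necessarily one evaluated in the paper.

```python
# Sketch of the basic prompting setup: give a code LLM a Verilog module header
# and let it complete the body. "Salesforce/codegen-350M-multi" is a small
# public code model chosen for illustration only.
from transformers import pipeline

generator = pipeline("text-generation", model="Salesforce/codegen-350M-multi")

prompt = (
    "// 8-bit synchronous counter with active-high reset\n"
    "module counter(input clk, input rst, output reg [7:0] count);\n"
)
completion = generator(prompt, max_new_tokens=128, do_sample=False)
print(completion[0]["generated_text"])
```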
no code implementations • 24 Jun 2023 • Rahul Kande, Hammond Pearce, Benjamin Tan, Brendan Dolan-Gavitt, Shailja Thakur, Ramesh Karri, Jeyavijayan Rajendran
As vulnerabilities in hardware can have severe implications on a system, there is a need for techniques to support security verification activities.
no code implementations • 23 Dec 2022 • Shailja Thakur
The lack of a sender authentication mechanism makes the Controller Area Network (CAN) vulnerable to security threats.
1 code implementation • 13 Dec 2022 • Shailja Thakur, Baleegh Ahmad, Zhenxing Fan, Hammond Pearce, Benjamin Tan, Ramesh Karri, Brendan Dolan-Gavitt, Siddharth Garg
Automating hardware design could eliminate a significant amount of human error from the engineering process and lead to fewer bugs.
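Benchmarking generated Verilog typically requires an automatic pass/fail check. Here is a minimal sketch of one such check using Icarus Verilog; the file names are placeholders and the testbench is assumed to print "FAIL" on a mismatch.

```python
# Minimal sketch of a pass/fail check for generated Verilog (file names are
# placeholders): compile the candidate with a golden testbench using Icarus
# Verilog, run the simulation, and treat a clean run as a pass.
import subprocess

def passes_testbench(design: str, testbench: str) -> bool:
    compile_step = subprocess.run(
        ["iverilog", "-o", "sim.out", design, testbench],
        capture_output=True, text=True,
    )
    if compile_step.returncode != 0:    # syntax/elaboration failure
        return False
    run_step = subprocess.run(["vvp", "sim.out"], capture_output=True, text=True)
    # Assumes the testbench prints "FAIL" when outputs mismatch the reference.
    return run_step.returncode == 0 and "FAIL" not in run_step.stdout

print(passes_testbench("candidate.v", "tb_counter.v"))
```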
no code implementations • 16 Jun 2020 • Shailja Thakur, Sebastian Fischmeister
To fully exploit the capabilities of complex neural networks, we propose a non-intrusive interpretability technique that uses the input and output of the model to generate a saliency map.
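Since the technique is non-intrusive, it can only rely on the model's inputs and outputs. A minimal sketch consistent with that constraint, though not the paper's exact algorithm, is an occlusion-style saliency map: mask one input segment at a time and record how much the output changes.

```python
# Black-box, occlusion-style saliency sketch consistent with the "input and
# output only" constraint (not the paper's exact algorithm): zero out one
# segment of the input at a time and record how much the output changes.
import numpy as np

def saliency_map(predict, x: np.ndarray, window: int = 8) -> np.ndarray:
    """`predict` maps a 1-D signal to a scalar score; no model internals used."""
    baseline = predict(x)
    scores = np.zeros(len(x))
    for start in range(0, len(x), window):
        occluded = x.copy()
        occluded[start:start + window] = 0.0          # mask one segment
        scores[start:start + window] = abs(baseline - predict(occluded))
    return scores / (scores.max() + 1e-12)            # normalize to [0, 1]

# Toy usage with a stand-in "model" that just sums part of the signal.
signal = np.random.randn(64)
print(saliency_map(lambda s: float(s[:32].sum()), signal))
```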