Search Results for author: Niraj K. Jha

Found 37 papers, 5 papers with code

PAGE: Domain-Incremental Adaptation with Past-Agnostic Generative Replay for Smart Healthcare

no code implementations • 13 Mar 2024 • Chia-Hao Li, Niraj K. Jha

When adapting to a new domain, it exploits real data from the new distribution and the current model to generate synthetic data that retain the learned knowledge of previous domains.
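
As a rough illustration of the generative-replay idea in this abstract (not PAGE's exact procedure), the sketch below mixes generator samples pseudo-labeled by a frozen copy of the model with real new-domain data; `generator`, `old_model`, and `new_loader` are hypothetical stand-ins.

```python
# Minimal, generic generative-replay step (illustrative only; not PAGE's exact procedure).
# `generator`, `old_model`, and `new_loader` are hypothetical stand-ins.
import torch
import torch.nn.functional as F

def adapt_one_epoch(model, old_model, generator, new_loader, optimizer, replay_ratio=0.5):
    old_model.eval()
    generator.eval()
    for x_new, y_new in new_loader:                  # real data from the new domain
        n_replay = int(replay_ratio * x_new.size(0))
        with torch.no_grad():
            x_syn = generator.sample(n_replay)       # synthetic inputs
            y_syn = old_model(x_syn).argmax(dim=1)   # pseudo-labels carry over old knowledge
        x = torch.cat([x_new, x_syn])
        y = torch.cat([y_new, y_syn])
        loss = F.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```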

Conformal Prediction • Domain Adaptation

Neural Slot Interpreters: Grounding Object Semantics in Emergent Slot Representations

no code implementations • 2 Feb 2024 • Bhishma Dedhia, Niraj K. Jha

Finally, we formulate the NSI program generator model to use the dense associations inferred from the alignment model to generate object-centric programs from slots.

Contrastive Learning • Object +4

BREATHE: Second-Order Gradients and Heteroscedastic Emulation based Design Space Exploration

no code implementations • 16 Aug 2023 • Shikhar Tuli, Niraj K. Jha

In graph-based search, BREATHE outperforms the next-best baseline, i.e., a graphical version of Gaussian-process-based Bayesian optimization, with up to 64.9% higher performance.
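
For context on the baseline mentioned above, here is a minimal Gaussian-process Bayesian-optimization loop with an expected-improvement acquisition; this is a generic sketch, not BREATHE or the paper's graphical variant.

```python
# Generic GP-based Bayesian optimization over a fixed candidate pool (maximization).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X_cand, y_best):
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

def bo_loop(objective, X_init, y_init, X_candidates, n_iter=20):
    X, y = list(X_init), list(y_init)
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(normalize_y=True).fit(np.array(X), np.array(y))
        ei = expected_improvement(gp, X_candidates, max(y))
        x_next = X_candidates[int(np.argmax(ei))]    # candidate with highest acquisition value
        X.append(x_next)
        y.append(objective(x_next))
    return X[int(np.argmax(y))], max(y)
```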

Bayesian Optimization

Zero-TPrune: Zero-Shot Token Pruning through Leveraging of the Attention Graph in Pre-Trained Transformers

no code implementations • 27 May 2023 • Hongjie Wang, Bhishma Dedhia, Niraj K. Jha

Deployment of Transformer models on edge devices is becoming increasingly challenging due to the rapidly growing inference cost, which scales quadratically with the number of tokens in the input sequence.
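
A back-of-the-envelope sketch of the quadratic scaling referred to above (the hidden size of 768 is an arbitrary assumption):

```python
# Rough self-attention cost: QK^T scores (n*n*d) plus softmax(scores)V (n*n*d).
def attention_flops(n, d):
    return 2 * n * n * d

for n in (128, 256, 512, 1024):
    print(n, attention_flops(n, d=768))   # FLOPs grow 4x each time n doubles
```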

DOCTOR: A Multi-Disease Detection Continual Learning Framework Based on Wearable Medical Sensors

no code implementations • 9 May 2023 • Chia-Hao Li, Niraj K. Jha

We demonstrate DOCTOR's efficacy in maintaining high disease classification accuracy with a single DNN model in various CL experiments.

Continual Learning • Synthetic Data Generation

SECRETS: Subject-Efficient Clinical Randomized Controlled Trials using Synthetic Intervention

no code implementations • 8 May 2023 • Sayeri Lala, Niraj K. Jha

The randomized controlled trial (RCT) is the gold standard for estimating the average treatment effect (ATE) of a medical intervention but requires hundreds to thousands of subjects, making it expensive and difficult to implement.
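
For reference, the standard difference-in-means ATE estimator that an RCT targets, and that SECRETS aims to match with far fewer subjects, can be written as:

```python
# Difference-in-means ATE estimator for a two-arm randomized trial.
import numpy as np

def ate_difference_in_means(y_treated, y_control):
    return float(np.mean(y_treated) - np.mean(y_control))

# Example: ate_difference_in_means([1.2, 0.8, 1.0], [0.4, 0.6, 0.5]) -> 0.5
```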

counterfactual

TransCODE: Co-design of Transformers and Accelerators for Efficient Training and Inference

no code implementations • 27 Mar 2023 • Shikhar Tuli, Niraj K. Jha

To effectively execute this method on hardware for a diverse set of transformer architectures, we propose ELECTOR, a framework that simulates transformer inference and training on a design space of accelerators.

EdgeTran: Co-designing Transformers for Efficient Inference on Mobile Edge Platforms

no code implementations • 24 Mar 2023 • Shikhar Tuli, Niraj K. Jha

In this work, we propose a framework, called ProTran, to profile the hardware performance measures for a design space of transformer architectures and a diverse set of edge devices.

AccelTran: A Sparsity-Aware Accelerator for Dynamic Inference with Transformers

1 code implementation • 28 Feb 2023 • Shikhar Tuli, Niraj K. Jha

On the other hand, AccelTran-Server achieves 5.73$\times$ higher throughput and 3.69$\times$ lower energy consumption compared to the state-of-the-art transformer co-processor, Energon.

CODEBench: A Neural Architecture and Hardware Accelerator Co-Design Framework

2 code implementations • 7 Dec 2022 • Shikhar Tuli, Chia-Hao Li, Ritvik Sharma, Niraj K. Jha

AccelBench performs cycle-accurate simulations for a diverse set of accelerator architectures in a vast design space.

Benchmarking

CTRL: Clustering Training Losses for Label Error Detection

1 code implementation • 17 Aug 2022 • Chang Yue, Niraj K. Jha

We propose a novel framework, called CTRL (Clustering TRaining Losses for label error detection), to detect label errors in multi-class datasets.
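
A hedged sketch of the core idea, clustering per-sample training-loss trajectories and flagging the higher-loss cluster; the paper's exact features and clustering procedure may differ.

```python
# Cluster per-sample loss trajectories and flag the high-loss cluster as likely label errors.
import numpy as np
from sklearn.cluster import KMeans

def flag_label_errors(loss_trajectories):
    # loss_trajectories: (n_samples, n_epochs) ndarray of per-sample training losses
    km = KMeans(n_clusters=2, n_init=10).fit(loss_trajectories)
    mean_loss = [loss_trajectories[km.labels_ == c].mean() for c in (0, 1)]
    noisy_cluster = int(np.argmax(mean_loss))   # mislabeled samples tend to stay high-loss
    return np.where(km.labels_ == noisy_cluster)[0]
```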

Clustering • Label Error Detection

SCouT: Synthetic Counterfactuals via Spatiotemporal Transformers for Actionable Healthcare

no code implementations • 9 Jul 2022 • Bhishma Dedhia, Roshini Balasubramanian, Niraj K. Jha

The Synthetic Control method has pioneered a class of powerful data-driven techniques to estimate the counterfactual reality of a unit from donor units.
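
A compact sketch of the classic Synthetic Control estimate that this line of work builds on: fit non-negative donor weights on pre-intervention outcomes, then project the treated unit's counterfactual post-intervention trajectory (the simplex constraint is only approximated here).

```python
# Classic synthetic-control style counterfactual from donor units.
import numpy as np
from scipy.optimize import nnls

def synthetic_control(donors_pre, target_pre, donors_post):
    # donors_pre: (T_pre, n_donors), target_pre: (T_pre,), donors_post: (T_post, n_donors)
    w, _ = nnls(donors_pre, target_pre)            # non-negative donor weights
    w = w / w.sum() if w.sum() > 0 else w          # crude approximation of the simplex constraint
    return donors_post @ w                         # counterfactual post-intervention outcomes
```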

counterfactual • Decision Making

FlexiBERT: Are Current Transformer Architectures too Homogeneous and Rigid?

no code implementations • 23 May 2022 • Shikhar Tuli, Bhishma Dedhia, Shreshth Tuli, Niraj K. Jha

We also propose a novel NAS policy, called BOSHNAS, that leverages this new scheme, Bayesian modeling, and second-order optimization, to quickly train and use a neural surrogate model to converge to the optimal architecture.
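
A rough sketch of surrogate-driven architecture search in this spirit, using a random-forest surrogate and an upper-confidence-bound score; this is not the exact BOSHNAS algorithm.

```python
# Pick the next architecture from a pool using a surrogate's mean prediction plus uncertainty.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def select_next(embeddings_seen, acc_seen, embeddings_pool, kappa=1.0):
    rf = RandomForestRegressor(n_estimators=50).fit(embeddings_seen, acc_seen)
    preds = np.stack([t.predict(embeddings_pool) for t in rf.estimators_])
    mean, std = preds.mean(axis=0), preds.std(axis=0)
    return int(np.argmax(mean + kappa * std))      # explore/exploit trade-off
```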

Graph Similarity • Neural Architecture Search

Machine Learning Assisted Security Analysis of 5G-Network-Connected Systems

no code implementations • 7 Aug 2021 • Tanujay Saha, Najwa Aaraj, Niraj K. Jha

The core network architecture of telecommunication systems has undergone a paradigm shift in fifth-generation (5G) networks.

BIG-bench Machine Learning

GRAVITAS: Graphical Reticulated Attack Vectors for Internet-of-Things Aggregate Security

no code implementations • 31 May 2021 • Jacob Brown, Tanujay Saha, Niraj K. Jha

Internet-of-Things (IoT) and cyber-physical systems (CPSs) may consist of thousands of devices connected in a complex network topology.

Management

Fast Design Space Exploration of Nonlinear Systems: Part I

no code implementations • 5 Apr 2021 • Sanjai Narain, Emily Mak, Dana Chee, Brendan Englot, Kishore Pochiraju, Niraj K. Jha, Karthik Narayan

This paper presents a new method of solving the inverse design problem, namely: given requirements or constraints on the output, find an input that also optimizes an objective function.

Active Learning • Bayesian Optimization +1

Fast Design Space Exploration of Nonlinear Systems: Part II

no code implementations • 5 Apr 2021 • Prerit Terway, Kenza Hamidouche, Niraj K. Jha

In the second step, we use an inverse design to search over a continuous space and fine-tune the component values with the goal of improving the value of the objective function.

Active Learning

MHDeep: Mental Health Disorder Detection System based on Body-Area and Deep Neural Networks

no code implementations • 20 Feb 2021 • Shayan Hassantabar, Joe Zhang, Hongxu Yin, Niraj K. Jha

At the patient level, MHDeep DNNs achieve accuracies of 100%, 100%, and 90.0% for the three mental health disorders, respectively.

Synthetic Data Generation

SHARKS: Smart Hacking Approaches for RisK Scanning in Internet-of-Things and Cyber-Physical Systems based on Machine Learning

no code implementations • 7 Jan 2021 • Tanujay Saha, Najwa Aaraj, Neel Ajjarapu, Niraj K. Jha

The novelty of this approach lies in extracting intelligence from known real-world CPS/IoT attacks, representing them in the form of regular expressions, and employing machine learning (ML) techniques on this ensemble of regular expressions to generate new attack vectors and security vulnerabilities.
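
A very rough illustration of encoding an attack chain as a regular expression over named exploit steps; the step vocabulary and pattern here are invented for illustration, not taken from SHARKS.

```python
# Represent a class of attack chains as a regex over named steps and test an observed chain.
import re

attack_pattern = re.compile(r"recon;(cred_theft|fw_exploit);priv_esc;exfil")  # hypothetical steps
observed_chain = "recon;fw_exploit;priv_esc;exfil"
print(bool(attack_pattern.fullmatch(observed_chain)))   # True: chain matches the known class
```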

Autonomous Vehicles

DISPATCH: Design Space Exploration of Cyber-Physical Systems

no code implementations • 21 Sep 2020 • Prerit Terway, Kenza Hamidouche, Niraj K. Jha

In the second step, we use an inverse design to search over a continuous space to fine-tune the component values and meet the diverse set of system requirements.
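
A generic sketch of the continuous fine-tuning step described above, using constrained optimization over component values; the requirement encoding is an assumption, not the paper's formulation.

```python
# Fine-tune component values x to optimize an objective subject to system requirements.
from scipy.optimize import minimize

def fine_tune(objective, requirements, x0, bounds):
    # requirements: list of functions g(x) >= 0 encoding the system constraints
    cons = [{"type": "ineq", "fun": g} for g in requirements]
    res = minimize(objective, x0, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x, res.fun
```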

Active Learning • Bayesian Optimization +2

Fully Dynamic Inference with Deep Neural Networks

no code implementations • 29 Jul 2020 • Wenhan Xia, Hongxu Yin, Xiaoliang Dai, Niraj K. Jha

Modern deep neural networks are powerful and widely applicable models that extract task-relevant information through multi-level abstraction.

Computational Efficiency • Self-Driving Cars

Efficient Synthesis of Compact Deep Neural Networks

no code implementations • 18 Apr 2020 • Wenhan Xia, Hongxu Yin, Niraj K. Jha

These large, deep models are often unsuitable for real-world applications, due to their massive computational cost, high memory bandwidth, and long latency.

Autonomous Driving

STEERAGE: Synthesis of Neural Networks Using Architecture Search and Grow-and-Prune Methods

no code implementations • 12 Dec 2019 • Shayan Hassantabar, Xiaoliang Dai, Niraj K. Jha

On the MNIST dataset, our CNN architecture achieves an error rate of 0.66% with 8.6x fewer parameters than the LeNet-5 baseline.

Navigate

DiabDeep: Pervasive Diabetes Diagnosis based on Wearable Medical Sensors and Efficient Neural Networks

no code implementations • 11 Oct 2019 • Hongxu Yin, Bilal Mukadam, Xiaoliang Dai, Niraj K. Jha

For server-side (edge-side) inference, we achieve 96.3% (95.3%) accuracy in classifying diabetics against healthy individuals, and 95.7% (94.6%) accuracy in distinguishing among type-1 diabetic, type-2 diabetic, and healthy individuals.

SPRING: A Sparsity-Aware Reduced-Precision Monolithic 3D CNN Accelerator Architecture for Training and Inference

no code implementations • 2 Sep 2019 • Ye Yu, Niraj K. Jha

To take advantage of sparsity, some accelerator designs explore sparsity encoding and evaluation on CNN accelerators.

Hardware Architecture

SECRET: Semantically Enhanced Classification of Real-world Tasks

no code implementations • 29 May 2019 • Ayten Ozge Akmandor, Jorge Ortiz, Irene Manotas, Bongjun Ko, Niraj K. Jha

SECRET performs classifications by fusing the semantic information of the labels with the available data: it combines the feature space of the supervised algorithms with the semantic space of the NLP algorithms and predicts labels based on this joint space.
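
A hedged illustration of fusing feature-space and semantic-space evidence; the weighted-sum combination rule below is an assumption, not necessarily SECRET's.

```python
# Fuse a supervised model's class probabilities with similarity to label-name embeddings.
import numpy as np

def fuse_predictions(class_probs, input_embedding, label_embeddings, alpha=0.5):
    # class_probs: (n_classes,); label_embeddings: (n_classes, d) NLP embeddings of label names
    sims = label_embeddings @ input_embedding
    sims = sims / (np.linalg.norm(label_embeddings, axis=1) * np.linalg.norm(input_embedding) + 1e-9)
    semantic = np.exp(sims) / np.exp(sims).sum()       # softmax over cosine similarities
    return int(np.argmax(alpha * class_probs + (1 - alpha) * semantic))
```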

Classification • General Classification

Incremental Learning Using a Grow-and-Prune Paradigm with Efficient Neural Networks

no code implementations • 27 May 2019 • Xiaoliang Dai, Hongxu Yin, Niraj K. Jha

Deep neural networks (DNNs) have become a widely deployed model for numerous machine learning applications.

Incremental Learning

SCANN: Synthesis of Compact and Accurate Neural Networks

no code implementations • 19 Apr 2019 • Shayan Hassantabar, Zeyu Wang, Niraj K. Jha

To address these challenges, we propose a two-step neural network synthesis methodology, called DR+SCANN, that combines two complementary approaches to design compact and accurate DNNs.
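
A toy two-step pipeline in the spirit of DR+SCANN: a dimensionality-reduction front end followed by a compact classifier (SCANN itself grows and prunes the network; sklearn's MLP is only a stand-in).

```python
# Step 1: dimensionality reduction; step 2: small classifier on the reduced features.
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

compact_model = make_pipeline(
    PCA(n_components=20),                                    # dimensionality reduction
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),   # compact network
)
# Usage: compact_model.fit(X_train, y_train); compact_model.score(X_test, y_test)
```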

Dimensionality Reduction • Neural Network Compression +1

Hardware-Guided Symbiotic Training for Compact, Accurate, yet Execution-Efficient LSTM

no code implementations • 30 Jan 2019 • Hongxu Yin, Guoyang Chen, Yingmin Li, Shuai Che, Weifeng Zhang, Niraj K. Jha

In this work, we propose a hardware-guided symbiotic training methodology for compact, accurate, yet execution-efficient inference models.

Language Modelling • Neural Network Compression +2

ChamNet: Towards Efficient Network Design through Platform-Aware Model Adaptation

1 code implementation • CVPR 2019 • Xiaoliang Dai, Peizhao Zhang, Bichen Wu, Hongxu Yin, Fei Sun, Yanghan Wang, Marat Dukhan, Yunqing Hu, Yiming Wu, Yangqing Jia, Peter Vajda, Matt Uyttendaele, Niraj K. Jha

We formulate platform-aware NN architecture search in an optimization framework and propose a novel algorithm to search for optimal architectures aided by efficient accuracy and resource (latency and/or energy) predictors.
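
A simplified view of predictor-aided search: score candidates with accuracy and latency predictors and keep the best one within the resource budget (the predictors are assumed to be given, not constructed here).

```python
# Keep the candidate with the highest predicted accuracy that meets the latency budget.
def search(candidates, accuracy_predictor, latency_predictor, latency_budget):
    feasible = [c for c in candidates if latency_predictor(c) <= latency_budget]
    return max(feasible, key=accuracy_predictor) if feasible else None
```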

Bayesian Optimization • Efficient Neural Network +1

Grow and Prune Compact, Fast, and Accurate LSTMs

no code implementations • 30 May 2018 • Xiaoliang Dai, Hongxu Yin, Niraj K. Jha

To address these problems, we propose a hidden-layer LSTM (H-LSTM) that adds hidden layers to the LSTM's original one-level non-linear control gates.
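
A hedged sketch of gates with added hidden layers in an LSTM cell; layer sizes and activations are assumptions, not the paper's exact design.

```python
# LSTM cell whose gates are small MLPs instead of single linear transforms.
import torch
import torch.nn as nn

class HLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        def gate():   # multi-layer gate instead of one linear layer
            return nn.Sequential(
                nn.Linear(input_size + hidden_size, hidden_size), nn.ReLU(),
                nn.Linear(hidden_size, hidden_size),
            )
        self.i, self.f, self.o, self.g = gate(), gate(), gate(), gate()

    def forward(self, x, state):
        h, c = state
        z = torch.cat([x, h], dim=1)
        i, f, o = torch.sigmoid(self.i(z)), torch.sigmoid(self.f(z)), torch.sigmoid(self.o(z))
        g = torch.tanh(self.g(z))
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, (h, c)
```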

Image Captioning • speech-recognition +1

NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm

no code implementations • 6 Nov 2017 • Xiaoliang Dai, Hongxu Yin, Niraj K. Jha

To address these problems, we introduce a network growth algorithm that complements network pruning to learn both weights and compact DNN architectures during training.
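
A toy magnitude-based pruning step, one half of a grow-and-prune cycle; the gradient-based growth step is omitted and the threshold rule is only illustrative.

```python
# Zero out the smallest-magnitude weights and return the resulting connectivity mask.
import torch

def prune_by_magnitude(weight, sparsity=0.5):
    k = max(1, int(sparsity * weight.numel()))
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).float()
    return weight * mask, mask          # masked weights and the mask for later re-growth
```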

Network Pruning
