Search Results for author: Bharat Runwal

Found 5 papers, 5 papers with code

SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning

1 code implementation • 28 Apr 2024 • Jinghan Jia, Yihua Zhang, Yimeng Zhang, Jiancheng Liu, Bharat Runwal, James Diffenderfer, Bhavya Kailkhura, Sijia Liu

The rise of Large Language Models (LLMs) has highlighted the necessity of effective unlearning mechanisms to comply with data regulations and ethical AI practices.

Stochastic Optimization
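
The snippet above is motivational, but the title names the core idea: using second-order optimization for unlearning. Below is a minimal, hedged sketch of that general idea, not the paper's actual method: a gradient-ascent step on a forget-set loss, preconditioned by a running diagonal curvature estimate (here built from squared gradients as a stand-in for a true Hessian estimator). The function name, `hessian_ema` state, and step sizes are illustrative assumptions.

```python
import torch

def second_order_unlearn_step(params, forget_loss, hessian_ema, lr=1e-3, beta=0.99, eps=1e-8):
    """One preconditioned ascent step on a forget-set loss (illustrative only).

    A running diagonal curvature estimate (squared-gradient EMA, used here as a
    stand-in for second-order information) scales the update; the step then
    ascends the loss so the model moves away from the forget data.
    """
    grads = torch.autograd.grad(forget_loss, params)
    with torch.no_grad():
        for p, g, h in zip(params, grads, hessian_ema):
            h.mul_(beta).addcmul_(g, g, value=1 - beta)   # diagonal curvature estimate
            p.add_(g / (h.sqrt() + eps), alpha=lr)        # ascend: increase forget loss

# Hypothetical usage on a toy model.
model = torch.nn.Linear(8, 2)
params = [p for p in model.parameters() if p.requires_grad]
hessian_ema = [torch.zeros_like(p) for p in params]
x, y = torch.randn(16, 8), torch.randint(0, 2, (16,))
loss = torch.nn.functional.cross_entropy(model(x), y)
second_order_unlearn_step(params, loss, hessian_ema)
```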

From PEFT to DEFT: Parameter Efficient Finetuning for Reducing Activation Density in Transformers

1 code implementation • 2 Feb 2024 • Bharat Runwal, Tejaswini Pedapati, Pin-Yu Chen

Building upon this insight, in this work, we propose a novel density loss that encourages higher activation sparsity (equivalently, lower activation density) in the pre-trained models.
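
A minimal sketch of how such a density loss could look, assuming an L1-style penalty on the post-activation hidden states of a transformer MLP block, added to the task loss during fine-tuning; the module layout and the `density_weight` coefficient are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class MLPBlockWithDensityLoss(nn.Module):
    """Transformer MLP block that also reports an activation-density penalty.

    The penalty is the mean absolute value of the post-activation hidden states,
    so minimizing it pushes activations toward zero (higher activation sparsity).
    """

    def __init__(self, d_model=768, d_hidden=3072):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.act = nn.GELU()
        self.down = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        h = self.act(self.up(x))
        density_loss = h.abs().mean()   # proxy for activation density
        return self.down(h), density_loss

# Hypothetical training step: task loss plus weighted density penalty.
block = MLPBlockWithDensityLoss()
x = torch.randn(4, 16, 768)             # (batch, seq, d_model)
out, density_loss = block(x)
task_loss = out.pow(2).mean()           # placeholder for the real objective
density_weight = 0.01                   # assumed coefficient
loss = task_loss + density_weight * density_loss
loss.backward()
```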

Uncovering the Hidden Cost of Model Compression

1 code implementation • 29 Aug 2023 • Diganta Misra, Muawiz Chaudhary, Agam Goyal, Bharat Runwal, Pin-Yu Chen

This empirical investigation underscores the need for a nuanced understanding beyond mere accuracy in sparse and quantized settings, thereby paving the way for further exploration in Visual Prompting techniques tailored for sparse and quantized models.

Model Compression • Quantization +2
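
As a rough illustration of the Visual Prompting setup the abstract refers to, one common formulation learns a border-style prompt added to the input image while the (sparse or quantized) backbone stays frozen; the prompt size, masking scheme, and toy backbone below are assumptions, not the paper's exact protocol.

```python
import torch
import torch.nn as nn

class PaddedVisualPrompt(nn.Module):
    """Learnable border prompt added to input images; backbone stays frozen."""

    def __init__(self, backbone, image_size=224, pad=16):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():   # e.g. a pruned or quantized model
            p.requires_grad = False
        mask = torch.zeros(1, 3, image_size, image_size)
        mask[:, :, :pad, :] = 1
        mask[:, :, -pad:, :] = 1
        mask[:, :, :, :pad] = 1
        mask[:, :, :, -pad:] = 1
        self.register_buffer("mask", mask)
        self.prompt = nn.Parameter(torch.zeros(1, 3, image_size, image_size))

    def forward(self, x):
        return self.backbone(x + self.prompt * self.mask)

# Hypothetical usage with a toy frozen backbone.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 10))
model = PaddedVisualPrompt(backbone)
logits = model(torch.randn(2, 3, 224, 224))
```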

Robust Graph Neural Networks using Weighted Graph Laplacian

1 code implementation • 3 Aug 2022 • Bharat Runwal, Vivek, Sandeep Kumar

For demonstration, the experiments are conducted with the Graph Convolutional Neural Network (GCNN) architecture; however, the proposed framework is easily amenable to any existing GNN architecture.

Computational Efficiency
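
A rough sketch of the kind of plug-in layer the snippet describes: a GCN-style layer whose propagation uses a learnable, edge-reweighted, symmetrically normalized adjacency (one way to realize a weighted graph Laplacian). The sigmoid reweighting and dense-matrix form are illustrative assumptions, not the paper's exact estimator.

```python
import torch
import torch.nn as nn

class WeightedGCNLayer(nn.Module):
    """GCN layer propagating over a learnable, symmetrically normalized,
    edge-reweighted adjacency (dense form for clarity)."""

    def __init__(self, in_dim, out_dim, num_nodes):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        # Learnable nonnegative edge weights (illustrative parameterization).
        self.edge_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))

    def forward(self, x, adj):
        w = torch.sigmoid(self.edge_logits) * adj            # keep only existing edges
        w = 0.5 * (w + w.t())                                # enforce symmetry
        a_hat = w + torch.eye(adj.size(0))                   # add self-loops
        d_inv_sqrt = a_hat.sum(dim=1).clamp(min=1e-6).pow(-0.5)
        norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
        return norm @ self.lin(x)                            # propagate node features

# Hypothetical usage on a tiny random graph.
n = 5
adj = (torch.rand(n, n) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()                          # symmetric 0/1 adjacency
layer = WeightedGCNLayer(in_dim=8, out_dim=4, num_nodes=n)
out = layer(torch.randn(n, 8), adj)
```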

APP: Anytime Progressive Pruning

1 code implementation • 4 Apr 2022 • Diganta Misra, Bharat Runwal, Tianlong Chen, Zhangyang Wang, Irina Rish

With the latest advances in deep learning, there has been considerable focus on the online learning paradigm due to its relevance in practical settings.

Network Pruning • Sparse Learning
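
As a loose sketch of what progressive pruning over sequentially arriving data could look like (not the paper's exact schedule), the snippet below raises a global magnitude-pruning sparsity target after each incoming megabatch, so the model remains usable at any point in the stream; the sparsity schedule and helper names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def train_with_progressive_pruning(model, megabatches, sparsity_targets, lr=1e-2):
    """Train on megabatches arriving over time, increasing global sparsity after
    each one (anytime-style: the model can be evaluated after every stage)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    layers = [(m, "weight") for m in model.modules() if isinstance(m, nn.Linear)]
    for (x, y), target in zip(megabatches, sparsity_targets):
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Prune the smallest-magnitude weights globally up to the new target.
        prune.global_unstructured(layers, pruning_method=prune.L1Unstructured, amount=target)

# Hypothetical usage: three megabatches, sparsity rising 30% -> 50% -> 70%.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 4))
data = [(torch.randn(32, 20), torch.randint(0, 4, (32,))) for _ in range(3)]
train_with_progressive_pruning(model, data, sparsity_targets=[0.3, 0.5, 0.7])
```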
