Search Results for author: Anirudh Jain

Found 6 papers, 5 papers with code

Empirical Analysis of the Strengths and Weaknesses of PEFT Techniques for LLMs

no code implementations • 28 Apr 2023 • George Pu, Anirudh Jain, Jihan Yin, Russell Kaplan

As foundation models continue to exponentially scale in size, efficient methods of adaptation become increasingly critical.

Confounding Tradeoffs for Neural Network Quantization

1 code implementation • 12 Feb 2021 • Sahaj Garg, Anirudh Jain, Joe Lou, Mitchell Nahmias

Many neural network quantization techniques have been developed to decrease the computational and memory footprint of deep learning.

Quantization
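The snippet above names quantization only in general terms. As a rough illustration of the basic idea (a minimal sketch of symmetric per-tensor int8 post-training quantization, not this paper's method; the weight matrix is made up):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ~ scale * q."""
    scale = np.abs(w).max() / 127.0                      # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)         # stand-in weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", np.abs(w - w_hat).max())         # bounded by ~scale/2
```

Storing q instead of w cuts memory fourfold (int8 vs. float32), at the cost of the rounding error printed above.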

Dynamic Precision Analog Computing for Neural Networks

1 code implementation • 12 Feb 2021 • Sahaj Garg, Joe Lou, Anirudh Jain, Mitchell Nahmias

We propose extending analog computing architectures to support varying levels of precision by repeating operations and averaging the result, decreasing the impact of noise.
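Averaging n repeats of a noisy operation shrinks the noise standard deviation by a factor of sqrt(n), which is the core of the idea described above. A minimal simulation (the additive-Gaussian noise model and all values are illustrative assumptions, not the paper's hardware model):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(512)
w = rng.standard_normal(512)
exact = x @ w

def noisy_dot(x, w, sigma=0.1):
    """One analog-style dot product corrupted by additive Gaussian read noise."""
    return x @ w + rng.normal(0.0, sigma)

for n in (1, 4, 16, 64):
    # Repeat the operation n times and average: noise std falls as sigma / sqrt(n).
    errors = [np.mean([noisy_dot(x, w) for _ in range(n)]) - exact
              for _ in range(1000)]
    print(f"n={n:3d}  empirical noise std = {np.std(errors):.4f}")
```

Each fourfold increase in repeats halves the effective noise, i.e. buys roughly one extra bit of precision.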

Cloud Removal in Satellite Images Using Spatiotemporal Generative Networks

3 code implementations • 14 Dec 2019 • Vishnu Sarukkai, Anirudh Jain, Burak Uzkent, Stefano Ermon

In contrast, we cast the problem of cloud removal as a conditional image synthesis challenge, and we propose a trainable spatiotemporal generator network (STGAN) to remove clouds.

Cloud Removal • Earth Observation +3
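To make the conditional-image-synthesis framing concrete, here is a toy generator that maps a temporal stack of cloudy frames to a single synthesized clear frame. This is only a sketch of the framing, not the authors' STGAN architecture; the layer sizes and tensor shapes are arbitrary assumptions:

```python
import torch
import torch.nn as nn

class TemporalGenerator(nn.Module):
    """Toy conditional generator: T cloudy frames in, one clear frame out.
    The temporal dimension is folded into the channel dimension (T*C -> C)."""
    def __init__(self, frames=3, channels=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(frames * channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, stack):                 # stack: (B, T, C, H, W)
        b, t, c, h, w = stack.shape
        return self.net(stack.reshape(b, t * c, h, w))

g = TemporalGenerator()
cloudy = torch.randn(2, 3, 3, 128, 128)       # batch of 3-date cloudy image stacks
clear = g(cloudy)                             # (2, 3, 128, 128) synthesized clear image
print(clear.shape)
```

In the GAN setting, such a generator would be trained adversarially against a discriminator that judges whether a (cloudy stack, clear image) pair looks real.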

VizSeq: A Visual Analysis Toolkit for Text Generation Tasks

1 code implementation • IJCNLP 2019 • Changhan Wang, Anirudh Jain, Danlu Chen, Jiatao Gu

Automatic evaluation of text generation tasks (e.g. machine translation, text summarization, image captioning and video description) usually relies heavily on task-specific metrics, such as BLEU and ROUGE.

Benchmarking • Image Captioning +5
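BLEU, one of the task-specific metrics named above, can be computed with the sacrebleu package (assuming it is installed; the sentences below are made up):

```python
import sacrebleu  # pip install sacrebleu

hypotheses = ["the cat sat on the mat", "a dog runs in the park"]
# One reference stream, aligned with the hypotheses (more streams = more references).
references = [["the cat is sitting on the mat", "the dog is running in the park"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```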

Practical Deep Learning with Bayesian Principles

1 code implementation • NeurIPS 2019 • Kazuki Osawa, Siddharth Swaroop, Anirudh Jain, Runa Eschenhagen, Richard E. Turner, Rio Yokota, Mohammad Emtiyaz Khan

Importantly, the benefits of Bayesian principles are preserved: predictive probabilities are well-calibrated, uncertainties on out-of-distribution data are improved, and continual-learning performance is boosted.

Continual Learning • Data Augmentation +1
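Well-calibrated predictive probabilities in the Bayesian setting typically come from averaging softmax outputs over posterior weight samples instead of using a single point estimate. A minimal Monte Carlo sketch (the factorized-Gaussian posterior over one linear layer is an illustrative assumption, not this paper's VOGN training method):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(8, 20)                                   # batch of made-up inputs

# Illustrative factorized-Gaussian posterior over one linear layer's weights.
w_mean = torch.randn(10, 20)
w_logstd = torch.full((10, 20), -2.0)

def predictive(x, samples=32):
    """Monte Carlo predictive: average softmax over posterior weight samples."""
    probs = 0.0
    for _ in range(samples):
        w = w_mean + w_logstd.exp() * torch.randn_like(w_mean)  # reparameterized sample
        probs = probs + F.softmax(x @ w.t(), dim=-1)
    return probs / samples                               # averaged class probabilities

print(predictive(x).sum(dim=-1))                         # each row sums to 1
```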
