Search Results for author: Adarsh Prasad

Found 16 papers, 0 papers with code

Heavy-tailed Streaming Statistical Estimation

no code implementations 25 Aug 2021 Che-Ping Tsai, Adarsh Prasad, Sivaraman Balakrishnan, Pradeep Ravikumar

We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.

regression, Stochastic Optimization
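
The entry above only states the estimation problem, so here is a minimal, hedged sketch of one standard device for streaming estimation under heavy tails: clipping each stochastic update so that rare, extreme samples cannot derail the running estimate. The function name and constants are illustrative assumptions, not necessarily the paper's algorithm.

```python
import numpy as np

def clipped_streaming_mean(sample_stream, p, clip_radius=5.0, step0=1.0):
    """Streaming mean estimate via clipped SGD on the squared loss (illustrative sketch)."""
    theta = np.zeros(p)
    for t, x in enumerate(sample_stream, start=1):
        grad = theta - x                      # gradient of 0.5 * ||theta - x||^2
        norm = np.linalg.norm(grad)
        if norm > clip_radius:                # clip heavy-tailed updates to a fixed radius
            grad *= clip_radius / norm
        theta -= (step0 / t) * grad           # decaying step size
    return theta

# Usage: heavy-tailed (Student-t) samples around a known true mean.
rng = np.random.default_rng(0)
true_mean = np.ones(10)
stream = (true_mean + rng.standard_t(df=2.5, size=10) for _ in range(20000))
print(clipped_streaming_mean(stream, p=10))   # close to the all-ones vector
```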

On Proximal Policy Optimization's Heavy-tailed Gradients

no code implementations 20 Feb 2021 Saurabh Garg, Joshua Zhanson, Emilio Parisotto, Adarsh Prasad, J. Zico Kolter, Zachary C. Lipton, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Pradeep Ravikumar

In this paper, we present a detailed empirical study to characterize the heavy-tailed nature of the gradients of the PPO surrogate reward function.

Continuous Control

Efficient Estimators for Heavy-Tailed Machine Learning

no code implementations 1 Jan 2021 Vishwak Srinivasan, Adarsh Prasad, Sivaraman Balakrishnan, Pradeep Kumar Ravikumar

A dramatic improvement in data collection technologies has aided in procuring massive amounts of unstructured and heterogeneous datasets.

BIG-bench Machine Learning

On Learning Ising Models under Huber's Contamination Model

no code implementations NeurIPS 2020 Adarsh Prasad, Vishwak Srinivasan, Sivaraman Balakrishnan, Pradeep Ravikumar

We study the problem of learning Ising models in a setting where some of the samples from the underlying distribution can be arbitrarily corrupted.

Robust Linear Regression: Optimal Rates in Polynomial Time

no code implementations 29 Jun 2020 Ainesh Bakshi, Adarsh Prasad

We obtain robust and computationally efficient estimators for learning several linear models that achieve statistically optimal convergence rate under minimal distributional assumptions.

Open-Ended Question Answering, regression

Learning Minimax Estimators via Online Learning

no code implementations 19 Jun 2020 Kartik Gupta, Arun Sai Suggala, Adarsh Prasad, Praneeth Netrapalli, Pradeep Ravikumar

We view the problem of designing minimax estimators as finding a mixed strategy Nash equilibrium of a zero-sum game.
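
As a hedged illustration of this game-theoretic viewpoint (not the paper's construction), the toy sketch below approximates a mixed Nash equilibrium of a small finite zero-sum game by letting "nature" run multiplicative weights over parameters while the statistician best-responds; the function name and risk matrix are invented for the example.

```python
import numpy as np

def approx_nash_zero_sum(risk, iters=2000, eta=0.05):
    """Approximate mixed Nash equilibrium of a finite zero-sum game (toy sketch).

    risk[i, j] = loss when nature picks parameter i and the statistician picks
    estimator j. Nature maximizes via multiplicative weights, the statistician
    best-responds, and the averaged plays approximate the minimax pair.
    """
    n_params, n_ests = risk.shape
    w = np.ones(n_params)
    avg_prior = np.zeros(n_params)
    est_mix = np.zeros(n_ests)
    for _ in range(iters):
        prior = w / w.sum()
        j = int(np.argmin(prior @ risk))       # statistician's best response
        w *= np.exp(eta * risk[:, j])          # nature reweights toward high-risk parameters
        avg_prior += prior
        est_mix[j] += 1.0
    return avg_prior / iters, est_mix / iters

# Toy risk matrix (matching-pennies style): equilibrium is uniform for both players.
risk = np.array([[1.0, 0.0],
                 [0.0, 1.0]])
prior, mix = approx_nash_zero_sum(risk)
print("least favorable prior ~", prior)        # ~ [0.5, 0.5]
print("minimax estimator mix ~", mix)          # ~ [0.5, 0.5]
```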

On Human-Aligned Risk Minimization

no code implementations NeurIPS 2019 Liu Leqi, Adarsh Prasad, Pradeep K. Ravikumar

The statistical decision theoretic foundations of modern machine learning have largely focused on the minimization of the expectation of some loss function for a given task.

Decision Making, Fairness

A Unified Approach to Robust Mean Estimation

no code implementations 1 Jul 2019 Adarsh Prasad, Sivaraman Balakrishnan, Pradeep Ravikumar

Building on this connection, we provide a simple variant of recent computationally-efficient algorithms for mean estimation in Huber's model, which, given our connection, entails that the same efficient sample-pruning based estimators are simultaneously robust to heavy-tailed noise and Huber contamination.
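
As a hedged illustration only (not the paper's exact algorithm), the sketch below shows the flavor of a sample-pruning mean estimator: repeatedly drop the points farthest from the current estimate and re-average. The function name and thresholds are assumptions made for the example.

```python
import numpy as np

def pruned_mean(X, trim_frac=0.1, rounds=3):
    """Toy sample-pruning mean estimator: iteratively trim the farthest points."""
    keep = np.ones(len(X), dtype=bool)
    for _ in range(rounds):
        mu = X[keep].mean(axis=0)
        dists = np.linalg.norm(X - mu, axis=1)
        cutoff = np.quantile(dists[keep], 1.0 - trim_frac)
        keep &= dists <= cutoff                # prune the most extreme fraction
    return X[keep].mean(axis=0)

# Usage: Gaussian data with 10% grossly corrupted samples.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
X[:100] += 50.0                                # contaminated fraction
print("plain mean: ", np.round(X.mean(axis=0), 2))
print("pruned mean:", np.round(pruned_mean(X), 2))
```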

Connecting Optimization and Regularization Paths

no code implementations NeurIPS 2018 Arun Suggala, Adarsh Prasad, Pradeep K. Ravikumar

We study the implicit regularization properties of optimization techniques by explicitly connecting their optimization paths to the regularization paths of "corresponding" regularized problems.
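
The classical instance of this connection is least squares, where gradient-descent iterates are often analyzed as loosely tracking the ridge path with regularization strength on the order of 1/(step size × iteration count). The toy check below compares the two paths numerically; it is an assumption-laden illustration, not the paper's analysis.

```python
import numpy as np

# Toy comparison: gradient descent on least squares vs. the ridge path
# with a matched regularization level lambda ~ 1 / (step * t).
rng = np.random.default_rng(2)
n, d = 200, 5
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

step = 1e-3
theta = np.zeros(d)
for t in range(1, 5001):
    theta -= step * A.T @ (A @ theta - b)      # plain gradient descent
    if t in (100, 1000, 5000):
        lam = 1.0 / (step * t)                 # matched regularization level
        ridge = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)
        print(t, np.linalg.norm(theta - ridge))  # gap shrinks along the paths
```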

Revisiting Adversarial Risk

no code implementations 7 Jun 2018 Arun Sai Suggala, Adarsh Prasad, Vaishnavh Nagarajan, Pradeep Ravikumar

Based on the modified definition, we show that there is no trade-off between adversarial and standard accuracies; there exist classifiers that are robust and achieve high standard accuracy.

Image Classification

Robust Estimation via Robust Gradient Estimation

no code implementations 19 Feb 2018 Adarsh Prasad, Arun Sai Suggala, Sivaraman Balakrishnan, Pradeep Ravikumar

We provide a new computationally-efficient class of estimators for risk minimization.

regression
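
As a hedged sketch of the general recipe the title points to (details of the paper's estimator differ), the example below runs gradient descent for linear regression but aggregates per-sample gradients with a coordinate-wise median-of-means instead of a plain average; all names and constants are illustrative.

```python
import numpy as np

def mom_gradient(per_sample_grads, n_blocks=10):
    """Coordinate-wise median-of-means aggregation of per-sample gradients."""
    blocks = np.array_split(per_sample_grads, n_blocks)
    return np.median(np.stack([blk.mean(axis=0) for blk in blocks]), axis=0)

def robust_linear_regression(X, y, steps=500, lr=0.1):
    """Gradient descent for least squares with a robust gradient estimate (toy sketch)."""
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        residual = X @ theta - y
        per_sample = residual[:, None] * X     # per-sample squared-loss gradients
        theta -= lr * mom_gradient(per_sample)
    return theta

# Usage: a handful of grossly corrupted responses barely moves the estimate.
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))
theta_star = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ theta_star + 0.1 * rng.normal(size=500)
y[:25] += 100.0                                # corrupted labels
print(np.round(robust_linear_regression(X, y), 2))   # close to theta_star
```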

On Separability of Loss Functions, and Revisiting Discriminative Vs Generative Models

no code implementations NeurIPS 2017 Adarsh Prasad, Alexandru Niculescu-Mizil, Pradeep K. Ravikumar

We revisit the classical analysis of generative vs discriminative models for general exponential families, and high-dimensional settings.

Fast Classification Rates for High-dimensional Gaussian Generative Models

no code implementations NeurIPS 2015 Tianyang Li, Adarsh Prasad, Pradeep K. Ravikumar

We consider the problem of binary classification when the covariates conditioned on each of the response values follow multivariate Gaussian distributions.

Binary Classification, Classification +3
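
As a hedged sketch of the basic Gaussian generative classifier the abstract describes (the paper's contribution concerns high-dimensional rates, not this recipe), the example below fits a class-conditional Gaussian per label and classifies by the larger log prior plus log-likelihood; the helper names are invented for the illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_gaussian_generative(X, y):
    """Fit class priors and class-conditional Gaussians for binary labels."""
    params = {}
    for c in (0, 1):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False), len(Xc) / len(X))
    return params

def predict(params, X):
    """Pick the class with the larger log prior + Gaussian log-likelihood."""
    scores = np.stack(
        [np.log(prior) + multivariate_normal(mean, cov).logpdf(X)
         for mean, cov, prior in (params[0], params[1])],
        axis=-1,
    )
    return scores.argmax(axis=-1)

# Usage: two well-separated Gaussian classes.
rng = np.random.default_rng(4)
X0 = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=200)
X1 = rng.multivariate_normal([2.0, 2.0], np.eye(2), size=200)
X = np.vstack([X0, X1])
y = np.r_[np.zeros(200, dtype=int), np.ones(200, dtype=int)]
params = fit_gaussian_generative(X, y)
print("train accuracy:", (predict(params, X) == y).mean())
```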

Submodular meets Structured: Finding Diverse Subsets in Exponentially-Large Structured Item Sets

no code implementations NeurIPS 2014 Adarsh Prasad, Stefanie Jegelka, Dhruv Batra

To cope with the high level of ambiguity faced in domains such as Computer Vision or Natural Language Processing, robust prediction methods often search for a diverse set of high-quality candidate solutions or proposals.

Sentence, Structured Prediction
