Search Results for author: Shivani Agarwal

Found 19 papers, 4 papers with code

Multiclass Learning from Noisy Labels for Non-decomposable Performance Measures

1 code implementation · 1 Feb 2024 · Mingyuan Zhang, Shivani Agarwal

Most work on learning from noisy labels has focused on standard loss-based performance measures.

Information Retrieval

Consistent Multiclass Algorithms for Complex Metrics and Constraints

1 code implementation · 18 Oct 2022 · Harikrishna Narasimhan, Harish G. Ramaswamy, Shiv Kumar Tavker, Drona Khurana, Praneeth Netrapalli, Shivani Agarwal

We present consistent algorithms for multiclass learning with complex performance metrics and constraints, where the objective and constraints are defined by arbitrary functions of the confusion matrix.
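As a small illustration of a performance metric defined as a function of the confusion matrix (a sketch only; the metric, names, and toy data below are illustrative, not the paper's algorithm), here is macro-averaged recall computed from a normalized confusion matrix:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n):
    """Normalized confusion matrix: C[i, j] = P(true = i, predicted = j)."""
    C = np.zeros((n, n))
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1
    return C / len(y_true)

def macro_recall(C):
    """One example of a metric that is a function of the confusion matrix."""
    row = C.sum(axis=1)
    return np.mean([C[i, i] / row[i] for i in range(len(C)) if row[i] > 0])

# per-class recalls 0.5, 1.0, 0.5 -> macro recall 2/3
C = confusion_matrix([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 0], 3)
```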

Fairness

Bayes Consistency vs. H-Consistency: The Interplay between Surrogate Loss Functions and the Scoring Function Class

no code implementations · NeurIPS 2020 · Mingyuan Zhang, Shivani Agarwal

When H is the class of linear models, the class F consists of certain piecewise linear scoring functions that are characterized by the same number of parameters as in the linear case; minimization over this class can be performed using an adaptation of the min-pooling idea from neural network training.
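A minimal sketch of such a piecewise linear scoring function (the names and shapes here are hypothetical, not from the paper): each class score is the minimum over a small set of linear pieces, i.e. min-pooling applied to linear functions of the input:

```python
import numpy as np

def min_pool_score(x, W, b):
    """Score class y as min over pieces j of (W[y, j] @ x + b[y, j])."""
    linear_pieces = W @ x + b          # shape (n_classes, n_pieces)
    return linear_pieces.min(axis=1)   # min-pool over the linear pieces

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4, 5))   # 3 classes, 4 linear pieces, 5 features
b = rng.normal(size=(3, 4))
x = rng.normal(size=5)
scores = min_pool_score(x, W, b)
pred = int(scores.argmax())      # predicted class
```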

Choice Bandits

no code implementations · NeurIPS 2020 · Arpit Agarwal, Nicholas Johnson, Shivani Agarwal

Here we study a natural generalization, that we term \emph{choice bandits}, where the learner plays a set of up to $k \geq 2$ arms and receives limited relative feedback in the form of a single multiway choice among the pulled arms, drawn from an underlying multiway choice model.

Convex Calibrated Surrogates for the Multi-Label F-Measure

no code implementations · ICML 2020 · Mingyuan Zhang, Harish G. Ramaswamy, Shivani Agarwal

In particular, the F-measure explicitly balances recall (fraction of active labels predicted to be active) and precision (fraction of labels predicted to be active that are actually so), both of which are important in evaluating the overall performance of a multi-label classifier.
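A worked example of this definition (illustrative code, not the paper's surrogate construction): with binary label vectors, precision and recall are computed as described above and combined by the harmonic mean:

```python
def f_measure(y_true, y_pred):
    """F1 for one example's binary label vectors (illustrative)."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    predicted, actual = sum(y_pred), sum(y_true)
    if predicted == 0 or actual == 0:
        return 0.0
    precision = tp / predicted   # predicted-active labels that are truly active
    recall = tp / actual         # truly active labels that were predicted active
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# 5 labels: 3 active, 2 predicted active, both correct:
# tp = 2, precision = 1.0, recall = 2/3, so F = 0.8
score = f_measure([1, 1, 1, 0, 0], [1, 1, 0, 0, 0])
```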

Multi-Label Classification

Accelerated Spectral Ranking

1 code implementation · ICML 2018 · Arpit Agarwal, Prathamesh Patil, Shivani Agarwal

In this paper, we design a provably faster spectral ranking algorithm, which we call accelerated spectral ranking (ASR), that is also consistent under the MNL/BTL models.
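For background, here is a minimal sketch of the classical spectral approach that ASR accelerates (this is not the ASR algorithm itself; the construction below is in the spirit of rank-centrality-style spectral ranking, with hypothetical variable names): pairwise win counts define a Markov chain whose stationary distribution orders the items:

```python
import numpy as np

def spectral_rank(wins):
    """wins[i, j] = number of times item j beat item i."""
    n = wins.shape[0]
    d = max(wins.sum(axis=1).max(), 1.0)     # uniform normalization constant
    P = wins / d                             # transitions flow toward winners
    np.fill_diagonal(P, 0.0)
    P += np.diag(1.0 - P.sum(axis=1))        # self-loops make rows sum to 1
    pi = np.full(n, 1.0 / n)
    for _ in range(1000):                    # power iteration: pi = pi @ P
        pi = pi @ P
    return pi / pi.sum()

wins = np.array([[0, 5, 8],
                 [1, 0, 6],
                 [2, 3, 0]], dtype=float)
scores = spectral_rank(wins)
ranking = np.argsort(-scores)    # item 2 collects the most wins here
```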

Recommendation Systems

Dueling Bandits: Beyond Condorcet Winners to General Tournament Solutions

no code implementations · NeurIPS 2016 · Siddartha Y. Ramamohan, Arun Rajkumar, Shivani Agarwal

Recent work on deriving $O(\log T)$ anytime regret bounds for stochastic dueling bandit problems has considered mostly Condorcet winners, which do not always exist, and more recently, winners defined by the Copeland set, which do always exist.
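The Copeland set mentioned here is easy to compute from a pairwise preference matrix (an illustrative sketch, not the paper's bandit algorithm): it consists of the arms with the maximum number of pairwise wins, and it is non-empty even when no Condorcet winner exists:

```python
import numpy as np

def copeland_set(P):
    """P[i, j] > 0.5 means arm i beats arm j on average."""
    wins = (P > 0.5).sum(axis=1)        # diagonal is 0.5, never counted
    return np.flatnonzero(wins == wins.max())

# A 3-arm cycle (rock-paper-scissors): no Condorcet winner exists,
# but the Copeland set is well defined (all three arms tie on wins).
P = np.array([[0.5, 0.6, 0.4],
              [0.4, 0.5, 0.6],
              [0.6, 0.4, 0.5]])
print(copeland_set(P))  # [0 1 2]
```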

Support Vector Algorithms for Optimizing the Partial Area Under the ROC Curve

no code implementations · 13 May 2016 · Harikrishna Narasimhan, Shivani Agarwal

Increasingly, however, in several applications, ranging from ranking to biometric screening to medicine, performance is measured not in terms of the full area under the ROC curve, but in terms of the \emph{partial} area under the ROC curve between two false positive rates.
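For concreteness, the partial AUC between two false positive rates can be computed from ROC points as follows (a plain evaluation sketch with hypothetical names, not the paper's SVM-based optimization method):

```python
import numpy as np

def partial_auc(y_true, scores, alpha, beta):
    """Area under the ROC curve for FPR in [alpha, beta], normalized."""
    order = np.argsort(-scores)
    y = np.asarray(y_true)[order]
    P, N = y.sum(), len(y) - y.sum()
    tpr = np.concatenate(([0.0], np.cumsum(y) / P))
    fpr = np.concatenate(([0.0], np.cumsum(1 - y) / N))
    grid = np.linspace(alpha, beta, 1001)          # fine FPR grid
    vals = np.interp(grid, fpr, tpr)               # TPR along the grid
    area = ((vals[:-1] + vals[1:]) / 2 * np.diff(grid)).sum()  # trapezoid rule
    return area / (beta - alpha)

y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.2, 0.8, 0.9])            # a perfect ranker
pauc = partial_auc(y_true, scores, 0.1, 0.5)       # ≈ 1.0
```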

Combinatorial Optimization

Consistent Algorithms for Multiclass Classification with a Reject Option

no code implementations · 15 May 2015 · Harish G. Ramaswamy, Ambuj Tewari, Shivani Agarwal

We consider the problem of $n$-class classification ($n\geq 2$), where the classifier can choose to abstain from making predictions at a given cost, say, a factor $\alpha$ of the cost of misclassification.
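A standard plug-in rule for this setting (a Chow-style sketch for intuition, not the surrogate-based algorithms of the paper): predict the most probable class, but abstain whenever its estimated probability falls below 1 - alpha:

```python
def predict_with_reject(probs, alpha):
    """probs: class probabilities for one example.
    Returns a class index, or None to abstain."""
    best = max(range(len(probs)), key=lambda y: probs[y])
    return best if probs[best] >= 1 - alpha else None

print(predict_with_reject([0.7, 0.2, 0.1], alpha=0.4))    # 0 (confident enough)
print(predict_with_reject([0.4, 0.35, 0.25], alpha=0.4))  # None (abstain)
```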

Classification · General Classification

Consistent Classification Algorithms for Multi-class Non-Decomposable Performance Metrics

no code implementations · 1 Jan 2015 · Harish G. Ramaswamy, Harikrishna Narasimhan, Shivani Agarwal

In this paper, we provide a unified framework for analysing a multi-class non-decomposable performance metric, where the problem of finding the optimal classifier for the performance metric is viewed as an optimization problem over the space of all confusion matrices achievable under the given distribution.

Classification · General Classification · +2

Online Decision-Making in General Combinatorial Spaces

no code implementations · NeurIPS 2014 · Arun Rajkumar, Shivani Agarwal

Here we study a general setting where costs may be linear in any suitable low-dimensional vector representation of elements of the decision space.

Decision Making

On the Statistical Consistency of Plug-in Classifiers for Non-decomposable Performance Measures

no code implementations · NeurIPS 2014 · Harikrishna Narasimhan, Rohit Vaish, Shivani Agarwal

In this work, we consider plug-in algorithms that learn a classifier by applying an empirically determined threshold to a suitable estimate of the class probability. We provide a general methodology for showing consistency of these methods for any non-decomposable measure that can be expressed as a continuous function of the true positive rate (TPR) and true negative rate (TNR), and for which the Bayes optimal classifier is the class probability function thresholded suitably.
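A minimal sketch of this plug-in scheme (illustrative; the names, toy data, and choice of metric are assumptions): sweep thresholds on an estimated class probability and keep the one that empirically maximizes a metric that is a function of TPR and TNR, here the G-mean sqrt(TPR * TNR):

```python
import numpy as np

def tune_threshold(probs, y, metric):
    """Pick the threshold on class-probability estimates that empirically
    maximizes a metric of (TPR, TNR)."""
    best_t, best_v = 0.5, -1.0
    for t in np.linspace(0.01, 0.99, 99):
        pred = (probs >= t).astype(int)
        tpr = pred[y == 1].mean() if (y == 1).any() else 0.0
        tnr = (1 - pred[y == 0]).mean() if (y == 0).any() else 0.0
        v = metric(tpr, tnr)
        if v > best_v:
            best_t, best_v = t, v
    return best_t, best_v

rng = np.random.default_rng(1)
y = (rng.random(500) < 0.2).astype(int)                  # imbalanced labels
probs = np.clip(0.6 * y + 0.2 * rng.random(500), 0, 1)   # crude "estimates"
t, v = tune_threshold(probs, y, lambda tpr, tnr: np.sqrt(tpr * tnr))
```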

Retrieval · Text Retrieval

Convex Calibration Dimension for Multiclass Loss Matrices

no code implementations · 12 Aug 2014 · Harish G. Ramaswamy, Shivani Agarwal

We extend the notion of classification calibration, which has been studied for binary and multiclass 0-1 classification problems (and for certain other specific learning problems), to the general multiclass setting, and derive necessary and sufficient conditions for a surrogate loss to be calibrated with respect to a loss matrix in this setting.

General Classification

On the Relationship Between Binary Classification, Bipartite Ranking, and Binary Class Probability Estimation

no code implementations · NeurIPS 2013 · Harikrishna Narasimhan, Shivani Agarwal

It is known that a good binary CPE model can be used to obtain a good binary classification model (by thresholding at 0.5), and also to obtain a good bipartite ranking model (by using the CPE model directly as a ranking model); it is also known that a binary classification model does not necessarily yield a CPE model.
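The two reductions described here are one-liners (illustrative values; a single CPE model yields both a classifier and a bipartite ranker):

```python
cpe_scores = [0.9, 0.2, 0.6, 0.4]                     # hypothetical CPE outputs
labels = [1 if s >= 0.5 else 0 for s in cpe_scores]   # classify: threshold at 0.5
ranking = sorted(range(len(cpe_scores)),
                 key=lambda i: -cpe_scores[i])        # rank: sort by CPE score
print(labels)   # [1, 0, 1, 0]
print(ranking)  # [0, 2, 3, 1]
```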

Binary Classification · Classification · +1

Convex Calibrated Surrogates for Low-Rank Loss Matrices with Applications to Subset Ranking Losses

no code implementations · NeurIPS 2013 · Harish G. Ramaswamy, Shivani Agarwal, Ambuj Tewari

The design of convex, calibrated surrogate losses, whose minimization entails consistency with respect to a desired target loss, is an important concept to have emerged in the theory of machine learning in recent years.

Classification Calibration Dimension for General Multiclass Losses

no code implementations · NeurIPS 2012 · Harish G. Ramaswamy, Shivani Agarwal

We extend the notion of classification calibration, which has been studied for binary and multiclass 0-1 classification problems (and for certain other specific learning problems), to the general multiclass setting, and derive necessary and sufficient conditions for a surrogate loss to be classification calibrated with respect to a loss matrix in this setting.

Classification · General Classification
