no code implementations • ICML 2020 • Prathamesh Patil, Arpit Agarwal, Shivani Agarwal, Sanjeev Khanna
In this paper, we initiate the study of robustness in rank aggregation under the popular Bradley-Terry-Luce (BTL) model for pairwise comparisons.
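For readers unfamiliar with the model, here is a minimal sketch of how pairwise comparison outcomes arise under BTL; the item weights below are hypothetical, chosen only for illustration.

```python
# Minimal sketch of the Bradley-Terry-Luce (BTL) pairwise comparison model:
# item i beats item j with probability w_i / (w_i + w_j).
# The weight vector w is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
w = np.array([3.0, 2.0, 1.0, 0.5])  # positive BTL weights, one per item

def btl_prob(i, j, w):
    """P(item i beats item j) under the BTL model."""
    return w[i] / (w[i] + w[j])

# Sample 10 independent comparison outcomes between items 0 and 2.
wins = rng.random(10) < btl_prob(0, 2, w)
print(btl_prob(0, 2, w), int(wins.sum()), "wins for item 0 out of 10")
```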
no code implementations • 8 Feb 2024 • Ritambhara Singh, Abhishek Jain, Pietro Perona, Shivani Agarwal, Junfeng Yang
We rigorously evaluate our method on state-of-the-art semantic segmentation benchmarks.
1 code implementation • 1 Feb 2024 • Mingyuan Zhang, Shivani Agarwal
Most work on learning from noisy labels has focused on standard loss-based performance measures.
1 code implementation • 3 Jan 2024 • Akash Ghosh, Arkadeep Acharya, Prince Jha, Aniket Gaudgaul, Rajdeep Majumdar, Sriparna Saha, Aman Chadha, Raghav Jain, Setu Sinha, Shivani Agarwal
This work introduces the task of multimodal medical question summarization for codemixed input in a low-resource setting.
1 code implementation • 18 Oct 2022 • Harikrishna Narasimhan, Harish G. Ramaswamy, Shiv Kumar Tavker, Drona Khurana, Praneeth Netrapalli, Shivani Agarwal
We present consistent algorithms for multiclass learning with complex performance metrics and constraints, where the objective and constraints are defined by arbitrary functions of the confusion matrix.
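As a hedged illustration of a metric defined by an arbitrary function of the confusion matrix (not the paper's algorithm itself), consider the worst per-class recall:

```python
# Example of a complex performance metric defined as a function of the
# multiclass confusion matrix; the metric (minimum per-class recall) and the
# labels below are illustrative, not taken from the paper.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 2, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0, 2])
C = confusion_matrix(y_true, y_pred)

def min_recall(C):
    """Worst per-class recall, a non-decomposable function of C."""
    return (np.diag(C) / C.sum(axis=1)).min()

print(min_recall(C))
```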
no code implementations • NeurIPS 2020 • Mingyuan Zhang, Shivani Agarwal
When H is the class of linear models, the class F consists of certain piecewise linear scoring functions characterized by the same number of parameters as in the linear case, and minimization over F can be performed using an adaptation of the min-pooling idea from neural network training.
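A rough sketch of the min-pooling idea alluded to above (the weights are hypothetical, and the paper's exact parameterization may differ):

```python
# Piecewise linear scoring by min-pooling over linear pieces: the score is
# the minimum of several affine functions of x, which is piecewise linear
# (and concave). Weights here are hypothetical.
import numpy as np

W = np.array([[1.0, -0.5],
              [0.2,  0.3]])   # one row of weights per linear piece
b = np.array([0.1, -0.2])

def min_pool_score(x, W, b):
    return np.min(W @ x + b)  # min over affine pieces w_k . x + b_k

print(min_pool_score(np.array([0.5, 1.0]), W, b))
```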
no code implementations • NeurIPS 2020 • Arpit Agarwal, Nicholas Johnson, Shivani Agarwal
Here we study a natural generalization, that we term 'choice bandits', where the learner plays a set of up to $k \geq 2$ arms and receives limited relative feedback in the form of a single multiway choice among the pulled arms, drawn from an underlying multiway choice model.
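To make the feedback model concrete, here is a sketch of a single multiway choice drawn from a multinomial logit (MNL) choice model, one natural instance of the underlying choice model; the arm weights are hypothetical.

```python
# Multiway choice feedback under the MNL model: the learner plays a set S of
# arms and observes one chosen arm, with P(i | S) = w_i / sum_{j in S} w_j.
import numpy as np

rng = np.random.default_rng(1)
w = np.array([1.0, 2.0, 0.5, 1.5, 0.8])  # hypothetical arm weights

def mnl_choice(S, w, rng):
    p = w[S] / w[S].sum()
    return S[rng.choice(len(S), p=p)]

S = np.array([0, 1, 3])       # the learner plays a set of k = 3 arms
print(mnl_choice(S, w, rng))  # single multiway choice observed as feedback
```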
no code implementations • ICML 2020 • Mingyuan Zhang, Harish G. Ramaswamy, Shivani Agarwal
In particular, the F-measure explicitly balances recall (fraction of active labels predicted to be active) and precision (fraction of labels predicted to be active that are actually so), both of which are important in evaluating the overall performance of a multi-label classifier.
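As a small worked example of the balance described above (the label vectors are hypothetical):

```python
# Precision, recall, and the F-measure (their harmonic mean) for one
# multi-label prediction; the label vectors are hypothetical.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0])  # truly active labels
y_pred = np.array([1, 1, 1, 0, 0])  # labels predicted to be active

tp = np.sum(y_true * y_pred)
recall = tp / y_true.sum()      # fraction of active labels predicted active
precision = tp / y_pred.sum()   # fraction of predicted-active labels truly active
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)
```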
1 code implementation • ICML 2018 • Arpit Agarwal, Prathamesh Patil, Shivani Agarwal
In this paper, we design a provably faster spectral ranking algorithm, which we call accelerated spectral ranking (ASR), that is also consistent under the MNL/BTL models.
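For context, here is a sketch of the classic spectral-ranking recipe that ASR accelerates: build a Markov chain from pairwise win counts and rank items by its stationary distribution. This is the standard Rank Centrality-style construction, not the ASR update itself, and the win counts are hypothetical.

```python
# Classic spectral ranking: a Markov chain that moves toward winners of
# pairwise comparisons; items are ranked by its stationary distribution.
import numpy as np

A = np.array([[0, 8, 6],           # A[i, j] = times item i beat item j
              [2, 0, 7],           # (hypothetical counts)
              [4, 3, 0]], dtype=float)
n = A.shape[0]

P = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            P[i, j] = A[j, i] / (A[i, j] + A[j, i]) / n  # move toward winners
    P[i, i] = 1.0 - P[i].sum()

pi = np.ones(n) / n
for _ in range(1000):              # power iteration to stationary distribution
    pi = pi @ P
print(np.argsort(-pi))             # higher stationary mass = better item
```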
no code implementations • NeurIPS 2016 • Siddartha Y. Ramamohan, Arun Rajkumar, Shivani Agarwal
Recent work on deriving $O(\log T)$ anytime regret bounds for stochastic dueling bandit problems has mostly considered Condorcet winners, which do not always exist, and more recently winners defined by the Copeland set, which always exist.
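A small worked example of why the Copeland set is attractive: in the (hypothetical) preference matrix below, the arms beat each other in a cycle, so no Condorcet winner exists, yet the Copeland set is still well defined.

```python
# Copeland scores from a dueling-bandit preference matrix. P[i, j] is the
# probability arm i beats arm j in a duel; the matrix is hypothetical.
import numpy as np

P = np.array([[0.5, 0.6, 0.4],
              [0.4, 0.5, 0.7],
              [0.6, 0.3, 0.5]])

wins = (P > 0.5).sum(axis=1)                      # Copeland score: arms beaten
copeland_set = np.flatnonzero(wins == wins.max())
print(copeland_set)   # [0 1 2]: a cycle, so no Condorcet winner exists
```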
no code implementations • 13 May 2016 • Harikrishna Narasimhan, Shivani Agarwal
Increasingly, however, in several applications, ranging from ranking to biometric screening to medicine, performance is measured not in terms of the full area under the ROC curve, but in terms of the partial area under the ROC curve between two false positive rates.
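A hedged sketch of the quantity in question: the partial AUC restricted to FPR in $[\alpha, \beta]$, computed here by interpolating the empirical ROC curve (the labels and scores are hypothetical).

```python
# Partial AUC between two false positive rates, via the trapezoid rule on an
# interpolated empirical ROC curve. Labels and scores are hypothetical.
import numpy as np
from sklearn.metrics import roc_curve

y      = np.array([0, 0, 1, 1, 0, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.6, 0.7, 0.2, 0.55])
fpr, tpr, _ = roc_curve(y, scores)

def partial_auc(fpr, tpr, alpha, beta, grid_size=200):
    """Area under the ROC curve restricted to FPR in [alpha, beta]."""
    grid = np.linspace(alpha, beta, grid_size)
    vals = np.interp(grid, fpr, tpr)
    return np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(grid))

print(partial_auc(fpr, tpr, 0.1, 0.3))
```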
no code implementations • 15 May 2015 • Harish G. Ramaswamy, Ambuj Tewari, Shivani Agarwal
We consider the problem of $n$-class classification ($n\geq 2$), where the classifier can choose to abstain from making predictions at a given cost, say, a factor $\alpha$ of the cost of misclassification.
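For intuition, a minimal sketch of the classical Chow-style plug-in rule for this setting (binary case; the probability estimates are hypothetical): predict only when the estimated posterior is confident enough relative to the abstention cost $\alpha$.

```python
# Chow's rule for classification with abstention: with misclassification cost
# 1 and abstention cost alpha, abstain whenever max posterior < 1 - alpha.
# The probability estimates below are hypothetical.
import numpy as np

alpha = 0.3
p_hat = np.array([0.95, 0.55, 0.40, 0.05])  # estimates of P(y = 1 | x)

def predict_with_abstain(p, alpha):
    conf = np.maximum(p, 1 - p)                    # confidence in argmax class
    pred = (p >= 0.5).astype(int)
    return np.where(conf >= 1 - alpha, pred, -1)   # -1 denotes "abstain"

print(predict_with_abstain(p_hat, alpha))          # [ 1 -1 -1  0]
```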
no code implementations • 1 Jan 2015 • Harish G. Ramaswamy, Harikrishna Narasimhan, Shivani Agarwal
In this paper, we provide a unified framework for analysing a multi-class non-decomposable performance metric, where the problem of finding the optimal classifier for the performance metric is viewed as an optimization problem over the space of all confusion matrices achievable under the given distribution.
no code implementations • NeurIPS 2014 • Arun Rajkumar, Shivani Agarwal
Here we study a general setting where costs may be linear in any suitable low-dimensional vector representation of elements of the decision space.
no code implementations • NeurIPS 2014 • Harikrishna Narasimhan, Rohit Vaish, Shivani Agarwal
In this work, we consider plug-in algorithms that learn a classifier by applying an empirically determined threshold to a suitable estimate of the class probability. We provide a general methodology for showing consistency of these methods for any non-decomposable measure that can be expressed as a continuous function of the true positive rate (TPR) and true negative rate (TNR), and for which the Bayes optimal classifier is the class probability function thresholded suitably.
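A hedged sketch of such a plug-in method, using the geometric mean of TPR and TNR as an example metric (the data and metric choice are illustrative, not from the paper):

```python
# Plug-in approach: threshold a class-probability estimate and choose the
# threshold that empirically maximizes a metric that is a function of TPR
# and TNR (the G-mean here). Labels and CPE outputs are hypothetical.
import numpy as np

y     = np.array([1, 0, 1, 1, 0, 0, 1, 0])
p_hat = np.array([0.9, 0.3, 0.6, 0.4, 0.2, 0.55, 0.8, 0.1])

def gmean(y, yhat):
    tpr = np.mean(yhat[y == 1] == 1)
    tnr = np.mean(yhat[y == 0] == 0)
    return np.sqrt(tpr * tnr)

best_t = max(np.unique(p_hat), key=lambda t: gmean(y, (p_hat >= t).astype(int)))
print(best_t, gmean(y, (p_hat >= best_t).astype(int)))
```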
no code implementations • 12 Aug 2014 • Harish G. Ramaswamy, Shivani Agarwal
We extend the notion of classification calibration, which has been studied for binary and multiclass 0-1 classification problems (and for certain other specific learning problems), to the general multiclass setting, and derive necessary and sufficient conditions for a surrogate loss to be calibrated with respect to a loss matrix in this setting.
no code implementations • NeurIPS 2013 • Harikrishna Narasimhan, Shivani Agarwal
It is known that a good binary CPE model can be used to obtain a good binary classification model (by thresholding at 0.5), and also to obtain a good bipartite ranking model (by using the CPE model directly as a ranking model); it is also known that a binary classification model does not necessarily yield a CPE model.
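A tiny illustration of the two reductions stated above (the CPE outputs are hypothetical):

```python
# The same class-probability estimates yield a binary classifier (threshold
# at 0.5) and a bipartite ranking (sort by score). Scores are hypothetical.
import numpy as np

p_hat = np.array([0.9, 0.2, 0.7, 0.4])   # CPE model outputs

classifier = (p_hat >= 0.5).astype(int)  # classification by thresholding at 0.5
ranking = np.argsort(-p_hat)             # ranking by the CPE scores directly

print(classifier)  # [1 0 1 0]
print(ranking)     # [0 2 1 3]
```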
no code implementations • NeurIPS 2013 • Harish G. Ramaswamy, Shivani Agarwal, Ambuj Tewari
The design of convex, calibrated surrogate losses, whose minimization entails consistency with respect to a desired target loss, has emerged as an important concept in the theory of machine learning in recent years.
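A familiar instance of this concept, for orientation: the logistic loss is a convex surrogate calibrated for the binary 0-1 loss, so thresholding the score that minimizes it recovers the Bayes-optimal classifier (this example is standard, not specific to the paper).

```python
# Logistic loss as a convex, calibrated surrogate for binary 0-1 loss
# (labels y in {-1, +1}); sign(score) gives the induced classifier.
import numpy as np

def logistic_loss(score, y):
    return np.log1p(np.exp(-y * score))

def zero_one_loss(score, y):
    return float(np.sign(score) != y)

print(logistic_loss(2.0, 1), zero_one_loss(2.0, 1))    # confident, correct
print(logistic_loss(-0.5, 1), zero_one_loss(-0.5, 1))  # incorrect prediction
```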
no code implementations • NeurIPS 2012 • Harish G. Ramaswamy, Shivani Agarwal
We extend the notion of classification calibration, which has been studied for binary and multiclass 0-1 classification problems (and for certain other specific learning problems), to the general multiclass setting, and derive necessary and sufficient conditions for a surrogate loss to be classification calibrated with respect to a loss matrix in this setting.