no code implementations • ICML 2020 • Corinna Cortes, Giulia Desalvo, Claudio Gentile, Mehryar Mohri, Ningshan Zhang
A general framework for online learning with partial information is one where feedback graphs specify which losses can be observed by the learner.
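The feedback-graph observation model mentioned above can be sketched in a few lines: playing an action reveals the losses of that action's neighbors in the graph. This is a minimal illustration of the observation model only, not of the paper's algorithms; the graph and loss values are invented for the example.

```python
# Minimal sketch of feedback-graph observations: playing action i
# reveals the losses of all actions in graph[i] (its out-neighborhood).
# Graph and losses below are illustrative, not from the paper.

def observed_losses(graph, losses, action):
    """Return the subset of losses revealed by playing `action`,
    where graph[i] is the set of actions whose loss i observes."""
    return {j: losses[j] for j in graph[action]}
```

With a full graph this recovers the expert (full-information) setting; with only self-loops it recovers the bandit setting.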
no code implementations • 15 Jun 2023 • Raef Bassily, Corinna Cortes, Anqi Mao, Mehryar Mohri
This is the modern problem of supervised domain adaptation from a public source to a private target domain.
no code implementations • 10 May 2023 • Pranjal Awasthi, Corinna Cortes, Mehryar Mohri
We show how these bounds can guide the design of learning algorithms that we discuss in detail.
no code implementations • NeurIPS 2021 • Corinna Cortes, Mehryar Mohri, Dmitry Storcheus, Ananda Theertha Suresh
We study the problem of learning accurate ensemble predictors, in particular boosting, in the presence of multiple source domains.
1 code implementation • 20 Sep 2021 • Corinna Cortes, Neil D. Lawrence
Further, seven years after the experiment, we find that for \emph{accepted} papers there is no correlation between quality scores and the impact of the paper, as measured by citation count.
no code implementations • NeurIPS 2020 • Corinna Cortes, Mehryar Mohri, Javier Gonzalvo, Dmitry Storcheus
We further implement the algorithm in a popular symbolic gradient computation framework and empirically demonstrate, on a number of datasets, the benefits of the $\almo$ framework over learning with a fixed mixture-weight distribution.
no code implementations • 25 Aug 2020 • Corinna Cortes, Mehryar Mohri, Ananda Theertha Suresh, Ningshan Zhang
We present a new discriminative technique for the multiple-source adaptation (MSA) problem.
no code implementations • 21 Aug 2020 • Pranjal Awasthi, Corinna Cortes, Yishay Mansour, Mehryar Mohri
In the adversarial setting, we design efficient algorithms with competitive ratio guarantees.
no code implementations • 26 Jun 2020 • Corinna Cortes, Mehryar Mohri, Ananda Theertha Suresh
We present a series of new and more favorable margin-based learning guarantees that depend on the empirical margin loss of a predictor.
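The empirical margin loss that these guarantees depend on admits a one-line sketch: it is the fraction of training points whose confidence $y f(x)$ falls at or below a margin $\rho$. The function name and sample values below are illustrative.

```python
# Minimal sketch of the empirical margin loss at margin rho:
# the fraction of points whose confidence y * f(x) is at most rho.

def empirical_margin_loss(margins, rho):
    """margins: the values y_i * f(x_i); counts those <= rho."""
    return sum(1 for m in margins if m <= rho) / len(margins)
```

At $\rho = 0$ this reduces to the ordinary empirical zero-one error (counting boundary points as errors).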
no code implementations • ICML 2020 • Corinna Cortes, Giulia Desalvo, Claudio Gentile, Mehryar Mohri, Ningshan Zhang
We present a new active learning algorithm that adaptively partitions the input space into a finite number of regions, and subsequently seeks a distinct predictor for each region, both phases actively requesting labels.
no code implementations • NeurIPS 2019 • Corinna Cortes, Mehryar Mohri, Dmitry Storcheus
We fill this gap by deriving data-dependent learning guarantees for \GB\ used with \emph{regularization}, expressed in terms of the Rademacher complexities of the constrained families of base predictors.
no code implementations • NeurIPS 2019 • Ben Adlam, Corinna Cortes, Mehryar Mohri, Ningshan Zhang
Generative adversarial networks (GANs) generate data based on minimizing a divergence between two distributions.
1 code implementation • 30 Apr 2019 • Charles Weill, Javier Gonzalvo, Vitaly Kuznetsov, Scott Yang, Scott Yak, Hanna Mazzawi, Eugen Hotaj, Ghassen Jerfel, Vladimir Macko, Ben Adlam, Mehryar Mohri, Corinna Cortes
AdaNet is a lightweight TensorFlow-based (Abadi et al., 2015) framework for automatically learning high-quality ensembles with minimal expert intervention.
no code implementations • NeurIPS 2018 • Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Dmitry Storcheus, Scott Yang
In this paper, we design efficient gradient computation algorithms for two broad families of structured prediction loss functions: rational and tropical losses.
no code implementations • 18 Apr 2018 • Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Holakou Rahmanian, Manfred K. Warmuth
We study the problem of online path learning with non-additive gains, which is a central problem appearing in several applications, including ensemble structured prediction.
no code implementations • 29 Oct 2017 • Corinna Cortes, Giulia Desalvo, Vitaly Kuznetsov, Mehryar Mohri, Scott Yang
We show that the notion of discrepancy can be used to design very general algorithms and a unified framework for the analysis of multi-armed rested bandit problems with non-stationary rewards.
no code implementations • ICML 2018 • Corinna Cortes, Giulia Desalvo, Claudio Gentile, Mehryar Mohri, Scott Yang
In the stochastic setting, we first point out a bias problem that limits the straightforward extension of algorithms such as UCB-N to time-varying feedback graphs, as needed in this context.
no code implementations • NeurIPS 2016 • Corinna Cortes, Giulia Desalvo, Mehryar Mohri
We present a new boosting algorithm for the key scenario of binary classification with abstention where the algorithm can abstain from predicting the label of a point, at the price of a fixed cost.
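The abstention trade-off in this scenario can be sketched directly: abstaining always costs a fixed $c$, while predicting costs 1 on a mistake and 0 otherwise. The threshold rule below is a standard confidence baseline for illustration, not the paper's boosting algorithm.

```python
# Minimal sketch of binary classification with abstention at fixed cost c.

def abstention_loss(y_true, y_pred, c):
    """c if the predictor abstains (y_pred is None),
    1 for a misclassification, 0 for a correct prediction."""
    if y_pred is None:
        return c
    return 0.0 if y_pred == y_true else 1.0

def predict_with_abstention(score, threshold):
    """Abstain when |score| falls below the confidence threshold,
    otherwise predict the sign of the score."""
    if abs(score) < threshold:
        return None
    return 1 if score > 0 else -1
```

Abstaining is only worthwhile when $c < 1$; otherwise predicting (even wrongly) is never more costly than abstaining.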
2 code implementations • ICML 2017 • Corinna Cortes, Xavi Gonzalvo, Vitaly Kuznetsov, Mehryar Mohri, Scott Yang
We present new algorithms for adaptively learning artificial neural networks.
no code implementations • NeurIPS 2016 • Corinna Cortes, Mehryar Mohri, Vitaly Kuznetsov, Scott Yang
We give new data-dependent margin guarantees for structured prediction for a very wide family of loss functions and a general family of hypotheses, with an arbitrary factor graph decomposition.
no code implementations • 14 Sep 2015 • Corinna Cortes, Prasoon Goyal, Vitaly Kuznetsov, Mehryar Mohri
This paper presents an algorithm, Voted Kernel Regularization, that provides the flexibility of using potentially very complex kernel functions, such as predictors based on much higher-degree polynomial kernels, while benefiting from strong learning guarantees.
no code implementations • 7 May 2014 • Corinna Cortes, Mehryar Mohri, Andres Muñoz Medina
We present a new algorithm for domain adaptation improving upon a discrepancy minimization algorithm previously shown to outperform a number of algorithms for this task.
no code implementations • NeurIPS 2013 • Corinna Cortes, Marius Kloft, Mehryar Mohri
We use the notion of local Rademacher complexity to design new algorithms for learning kernels.
no code implementations • 22 Oct 2013 • Corinna Cortes, Spencer Greenberg, Mehryar Mohri
We present an extensive analysis of relative deviation bounds, including detailed proofs of two-sided inequalities and their implications.
no code implementations • NeurIPS 2012 • Stephen Boyd, Corinna Cortes, Mehryar Mohri, Ana Radovanovic
We introduce a new notion of classification accuracy based on the top $\tau$-quantile values of a scoring function, a relevant criterion in a number of problems arising for search engines.
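The top $\tau$-quantile criterion can be sketched as follows: rank the points by score and evaluate accuracy only on the top $\tau$ fraction, the portion a search engine actually surfaces. The function name and rounding convention below are illustrative assumptions, not the paper's exact definition.

```python
# Minimal sketch of accuracy restricted to the top tau-quantile of a
# scoring function: only the highest-scoring tau fraction is evaluated.

def top_quantile_accuracy(scores, labels, tau):
    """Fraction of positives among the top tau-quantile ranked by score."""
    n_top = max(1, int(round(tau * len(scores))))
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])
    return sum(1 for _, y in ranked[:n_top] if y == 1) / n_top
```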
no code implementations • 2 Mar 2012 • Corinna Cortes, Mehryar Mohri, Afshin Rostamizadeh
Our theoretical results include: a novel concentration bound for centered alignment between kernel matrices; a proof of the existence of effective predictors for kernels with high alignment, both for classification and for regression; and stability-based generalization bounds for a broad family of algorithms for learning kernels based on centered alignment.
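Centered alignment itself is a short computation: center each kernel matrix with $H = I - \frac{1}{n}\mathbf{1}\mathbf{1}^\top$, then take the normalized Frobenius inner product $\hat{\rho}(K_1, K_2) = \langle K_1^c, K_2^c\rangle_F \, / \, (\|K_1^c\|_F \|K_2^c\|_F)$. The sketch below illustrates that definition; the example matrices are invented.

```python
import numpy as np

# Minimal sketch of centered alignment between two kernel matrices.

def center(K):
    """Center K as H K H with H = I - (1/n) 11^T."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def centered_alignment(K1, K2):
    K1c, K2c = center(K1), center(K2)
    num = np.sum(K1c * K2c)                      # Frobenius inner product
    den = np.linalg.norm(K1c) * np.linalg.norm(K2c)
    return num / den
```

Note the quantity is scale-invariant: multiplying either kernel by a positive constant leaves the alignment unchanged.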
no code implementations • NeurIPS 2010 • Corinna Cortes, Yishay Mansour, Mehryar Mohri
This paper presents an analysis of importance weighting for learning from finite samples and gives a series of theoretical and algorithmic results.
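The basic importance-weighting estimator under analysis is simple to state: reweight each source-sample loss by $w(x) = p_{\text{target}}(x)/p_{\text{source}}(x)$. The sketch below shows that estimator and the common weight-capping fix; the function names and the capping constant are illustrative, not the paper's algorithm.

```python
# Minimal sketch of importance weighting: reweight source-sample losses
# by w(x) = p_target(x) / p_source(x) to estimate the target loss.

def importance_weighted_loss(losses, weights):
    """Plain importance-weighted empirical loss. Unbounded weights can
    blow up the variance, which is the issue analyzed in the paper."""
    return sum(l * w for l, w in zip(losses, weights)) / len(losses)

def clipped_weights(weights, cap):
    """A common remedy: cap the weights, trading bias for variance."""
    return [min(w, cap) for w in weights]
```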
no code implementations • NeurIPS 2009 • Bing Bai, Jason Weston, David Grangier, Ronan Collobert, Kunihiko Sadamasa, Yanjun Qi, Corinna Cortes, Mehryar Mohri
We present a class of nonlinear (polynomial) models that are discriminatively trained to directly map from the word content in a query-document or document-document pair to a ranking score.
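The degree-2 (bilinear) instance of such polynomial models scores a query-document pair as $f(q, d) = q^\top W d$ over word-count vectors. The sketch below illustrates that scoring form only; the matrix $W$ and vectors are invented for the example, and the trained models in the paper are learned from click or relevance data.

```python
import numpy as np

# Minimal sketch of a bilinear (degree-2 polynomial) ranking score
# f(q, d) = q^T W d over word-count vectors q and d.

def ranking_score(q, d, W):
    """Score a query-document pair; W couples query and document words."""
    return float(q @ W @ d)
```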
no code implementations • NeurIPS 2009 • Corinna Cortes, Mehryar Mohri, Afshin Rostamizadeh
This paper studies the general problem of learning kernels based on a polynomial combination of base kernels.