Search Results for author: Bo Waggoner

Found 25 papers, 3 papers with code

Trading off Consistency and Dimensionality of Convex Surrogates for the Mode

no code implementations 16 Feb 2024 Enrique Nueve, Bo Waggoner, Dhamma Kimpara, Jessie Finocchiaro

We investigate ways to trade off surrogate loss dimension, the number of problem instances, and restricting the region of consistency in the simplex for multiclass classification.

Hallucination Information Retrieval +1

Forecasting Competitions with Correlated Events

no code implementations 24 Mar 2023 Rafael Frongillo, Manuel Lladser, Anish Thilagar, Bo Waggoner

We initiate the study of forecasting competitions for correlated events.

Proper losses for discrete generative models

no code implementations 7 Nov 2022 Rafael Frongillo, Dhamma Kimpara, Bo Waggoner

The characterization rules out a loss whose expectation is the cross-entropy between the target distribution and the model.
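
For background (a standard definition, not text from the abstract): a loss $\ell(p, y)$ for a probabilistic report $p$ is proper if reporting the true distribution minimizes expected loss,

$\mathbb{E}_{Y \sim q}\,\ell(q, Y) \le \mathbb{E}_{Y \sim q}\,\ell(p, Y)$ for all distributions $p$ and $q$,

with the log loss $\ell(p, y) = -\log p(y)$, whose expectation is the cross-entropy, as the canonical example; the snippet above says that a loss with this expectation is ruled out in the generative-model setting the paper studies.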

An Embedding Framework for the Design and Analysis of Consistent Polyhedral Surrogates

no code implementations 29 Jun 2022 Jessie Finocchiaro, Rafael M. Frongillo, Bo Waggoner

Using these results, we establish that indirect elicitation, a necessary condition for consistency, is also sufficient when working with polyhedral surrogates.

Structured Prediction

Surrogate Regret Bounds for Polyhedral Losses

no code implementations NeurIPS 2021 Rafael Frongillo, Bo Waggoner

Surrogate risk minimization is a ubiquitous paradigm in supervised machine learning, wherein a target problem is solved by minimizing a surrogate loss on a dataset.

Unifying lower bounds on prediction dimension of convex surrogates

no code implementations NeurIPS 2021 Jessica Finocchiaro, Rafael Frongillo, Bo Waggoner

The convex consistency dimension of a supervised learning task is the lowest prediction dimension $d$ such that there exists a convex surrogate $L : \mathbb{R}^d \times \mathcal{Y} \to \mathbb{R}$ that is consistent for the given task.

Open-Ended Question Answering
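
As a concrete instance of this definition (a standard example added here, not one drawn from the abstract): for binary classification with $\mathcal{Y} = \{-1, +1\}$ and 0-1 target loss, the hinge loss

$L(u, y) = \max\{0,\, 1 - uy\}$, with $L : \mathbb{R} \times \mathcal{Y} \to \mathbb{R}$,

is a consistent convex surrogate using a single real-valued prediction, so the convex consistency dimension of binary classification is $d = 1$.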

Linear Functions to the Extended Reals

no code implementations 18 Feb 2021 Bo Waggoner

This note investigates functions from $\mathbb{R}^d$ to $\mathbb{R} \cup \{\pm \infty\}$ that satisfy axioms of linearity wherever allowed by extended-value arithmetic.

Statistics Theory Computer Science and Game Theory
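
One example of such a function (my illustration, not taken from the note): fix $a \in \mathbb{R}^d$ and let $f(x) = +\infty$ if $\langle a, x \rangle > 0$, $f(x) = 0$ if $\langle a, x \rangle = 0$, and $f(x) = -\infty$ if $\langle a, x \rangle < 0$. Then $f(x + y) = f(x) + f(y)$ and $f(cx) = c\, f(x)$ hold whenever the right-hand sides are defined in extended-value arithmetic (the undefined cases being $\infty - \infty$ and $0 \cdot \infty$), even though $f$ is not an ordinary linear function.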

Efficient Competitions and Online Learning with Strategic Forecasters

no code implementations 16 Feb 2021 Rafael Frongillo, Robert Gomez, Anish Thilagar, Bo Waggoner

Winner-take-all competitions in forecasting and machine-learning suffer from distorted incentives.

Unifying Lower Bounds on Prediction Dimension of Consistent Convex Surrogates

no code implementations NeurIPS 2021 Jessie Finocchiaro, Rafael Frongillo, Bo Waggoner

Given a prediction task, understanding when one can and cannot design a consistent convex surrogate loss, particularly a low-dimensional one, is an important and active area of machine learning research.

Structured Prediction

Non-parametric Binary regression in metric spaces with KL loss

no code implementations 19 Oct 2020 Ariel Avital, Klim Efremenko, Aryeh Kontorovich, David Toplin, Bo Waggoner

We propose a non-parametric variant of binary regression, where the hypothesis is regularized to be a Lipschitz function taking a metric space to [0, 1] and the loss is logarithmic.

Generalization Bounds regression
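
For reference, the logarithmic loss in question, for a hypothesis value $h(x) \in [0, 1]$ and label $y \in \{0, 1\}$, is

$\ell(h(x), y) = -y \log h(x) - (1 - y) \log(1 - h(x))$,

and its excess expected value over the Bayes-optimal predictor is a KL divergence between the true and predicted conditional label distributions, which is why the title refers to KL loss (standard background, stated as my reading rather than text from the paper).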

A Smoothed Analysis of Online Lasso for the Sparse Linear Contextual Bandit Problem

no code implementations 16 Jul 2020 Zhiyuan Liu, Huazheng Wang, Bo Waggoner, Youjian Liu, Lijun Chen

We investigate the sparse linear contextual bandit problem where the parameter $\theta$ is sparse.
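
A minimal sketch of the kind of baseline this setting suggests, refitting a Lasso estimate of the sparse parameter each round and acting greedily on it; this assumes scikit-learn is available and is only an illustration of the problem, not the algorithm or smoothed analysis from the paper:

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    d, k, T = 100, 5, 500                              # ambient dimension, sparsity, rounds
    theta = np.zeros(d)
    theta[rng.choice(d, size=k, replace=False)] = 1.0  # unknown sparse parameter

    X_hist, r_hist = [], []
    for t in range(T):
        arms = rng.normal(size=(10, d))                # candidate contexts this round
        if len(X_hist) < 10:
            choice = int(rng.integers(len(arms)))      # a few forced exploration rounds
        else:
            lasso = Lasso(alpha=0.1).fit(np.array(X_hist), np.array(r_hist))
            choice = int(np.argmax(arms @ lasso.coef_))  # act greedily on the sparse estimate
        x = arms[choice]
        r_hist.append(float(x @ theta + rng.normal(scale=0.1)))  # noisy linear reward
        X_hist.append(x)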

Decentralized & Collaborative AI on Blockchain

1 code implementation 16 Jul 2019 Justin D. Harris, Bo Waggoner

Machine learning has recently enabled large advances in artificial intelligence, but these tend to be highly centralized.

Recommendation Systems

Toward a Characterization of Loss Functions for Distribution Learning

no code implementations NeurIPS 2019 Nika Haghtalab, Cameron Musco, Bo Waggoner

We aim to understand this fact, taking an axiomatic approach to the design of loss functions for learning distributions.

Density Estimation

Equal Opportunity in Online Classification with Partial Feedback

1 code implementation NeurIPS 2019 Yahav Bechavod, Katrina Ligett, Aaron Roth, Bo Waggoner, Zhiwei Steven Wu

We study an online classification problem with partial feedback in which individuals arrive one at a time from a fixed but unknown distribution, and must be classified as positive or negative.

Classification Decision Making Under Uncertainty +3
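
Here "partial feedback" means the learner only observes an individual's true label when it classifies them as positive (e.g., only approved applicants reveal the outcome). A minimal sketch of that interaction loop, with a placeholder linear policy and synthetic stream that are my assumptions rather than anything from the paper:

    import numpy as np

    rng = np.random.default_rng(1)
    w = rng.normal(size=5)                       # placeholder linear classifier
    observed = []                                # the only (x, y) pairs the learner ever sees

    for t in range(1000):
        x = rng.normal(size=5)                   # individual drawn from a fixed distribution
        y = int(x[0] + 0.1 * rng.normal() > 0)   # true label, hidden by default
        if x @ w >= 0:                           # predict positive
            observed.append((x, y))              # feedback arrives only on positive predictions
        # negative predictions yield no feedback; this one-sidedness is the partial-feedback
        # structure described above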

Multi-Observation Regression

no code implementations 27 Feb 2018 Rafael Frongillo, Nishant A. Mehta, Tom Morgan, Bo Waggoner

Recent work introduced loss functions which measure the error of a prediction based on multiple simultaneous observations or outcomes.

regression
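
A standard illustration of such a loss (common in this line of work, included as background rather than a quote): the variance of $Y$ is not elicitable from single observations, but with two i.i.d. observations $(y_1, y_2)$ the loss

$L(r, (y_1, y_2)) = \left(r - \tfrac{1}{2}(y_1 - y_2)^2\right)^2$

is minimized in expectation at $r = \mathrm{Var}(Y)$, because $\mathbb{E}\left[\tfrac{1}{2}(y_1 - y_2)^2\right] = \mathrm{Var}(Y)$.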

Local Differential Privacy for Evolving Data

no code implementations NeurIPS 2018 Matthew Joseph, Aaron Roth, Jonathan Ullman, Bo Waggoner

Moreover, existing techniques to mitigate this effect do not apply in the "local model" of differential privacy that these systems use.
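
For background on the local model mentioned here: each user randomizes their own data before it is collected, so the curator never sees raw values. The canonical single-bit mechanism is randomized response (standard background, not a mechanism from this paper):

    import numpy as np

    def randomized_response(bit, eps, rng=np.random.default_rng()):
        # Report the true bit with probability e^eps / (1 + e^eps), else flip it;
        # this satisfies eps-local differential privacy for one binary value.
        p_true = np.exp(eps) / (1.0 + np.exp(eps))
        return bit if rng.random() < p_true else 1 - bit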

Accuracy First: Selecting a Differential Privacy Level for Accuracy Constrained ERM

no code implementations NeurIPS 2017 Katrina Ligett, Seth Neel, Aaron Roth, Bo Waggoner, Steven Z. Wu

Traditional approaches to differential privacy assume a fixed privacy requirement ε for a computation, and attempt to maximize the accuracy of the computation subject to the privacy constraint.

Strategic Classification from Revealed Preferences

no code implementations 22 Oct 2017 Jinshuo Dong, Aaron Roth, Zachary Schutzman, Bo Waggoner, Zhiwei Steven Wu

We study an online linear classification problem, in which the data is generated by strategic agents who manipulate their features in an effort to change the classification outcome.

Classification General Classification

Multi-Observation Elicitation

no code implementations 5 Jun 2017 Sebastian Casalaina-Martin, Rafael Frongillo, Tom Morgan, Bo Waggoner

We study loss functions that measure the accuracy of a prediction based on multiple data points simultaneously.

BIG-bench Machine Learning

Accuracy First: Selecting a Differential Privacy Level for Accuracy-Constrained ERM

1 code implementation 30 May 2017 Katrina Ligett, Seth Neel, Aaron Roth, Bo Waggoner, Z. Steven Wu

Traditional approaches to differential privacy assume a fixed privacy requirement $\epsilon$ for a computation, and attempt to maximize the accuracy of the computation subject to the privacy constraint.
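
A toy sketch of the accuracy-first idea, searching over increasing privacy levels until a noisy model meets the accuracy target, using plain Laplace output perturbation. Note that this naive loop pays privacy cost for every candidate $\epsilon$ it tries, which is precisely what the paper's noise-reduction technique avoids, so treat it only as an illustration of the problem setup, not the paper's method:

    import numpy as np

    rng = np.random.default_rng(2)

    def perturb(theta_hat, sensitivity, eps):
        # Laplace output perturbation of the (non-private) ERM solution at level eps
        return theta_hat + rng.laplace(scale=sensitivity / eps, size=theta_hat.shape)

    def accuracy_first(theta_hat, sensitivity, excess_error, target,
                       eps_grid=(0.01, 0.02, 0.04, 0.08, 0.16, 0.32)):
        # Try the most private level first; return the first noisy model whose
        # excess empirical error (excess_error is a user-supplied callable) meets the target.
        for eps in eps_grid:
            theta_noisy = perturb(theta_hat, sensitivity, eps)
            if excess_error(theta_noisy) <= target:
                return theta_noisy, eps
        return theta_noisy, eps_grid[-1]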

A Market Framework for Eliciting Private Data

no code implementations NeurIPS 2015 Bo Waggoner, Rafael Frongillo, Jacob D. Abernethy

We propose a mechanism for purchasing information from a sequence of participants. The participants may simply hold data points they wish to sell, or may have more sophisticated information; either way, they are incentivized to participate as long as they believe their data points are representative or their information will improve the mechanism's future prediction on a test set. The mechanism, which draws on the principles of prediction markets, has a bounded budget and minimizes generalization error for Bregman divergence loss functions. We then show how to modify this mechanism to preserve the privacy of participants' information: At any given time, the current prices and predictions of the mechanism reveal almost no information about any one participant, yet in total over all participants, information is accurately aggregated.

Future prediction
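
For reference, the Bregman divergence generated by a strictly convex, differentiable $f$ (a standard definition, included as background) is

$D_f(p, q) = f(p) - f(q) - \langle \nabla f(q), p - q \rangle$,

which gives squared error for $f(x) = \|x\|^2$ and KL divergence when $f$ is the negative Shannon entropy; these are the loss functions for which the mechanism above guarantees a bounded budget and minimized generalization error.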

Low-Cost Learning via Active Data Procurement

no code implementations 20 Feb 2015 Jacob Abernethy, Yi-Ling Chen, Chien-Ju Ho, Bo Waggoner

Our results in a sense parallel classic sample complexity guarantees, but with the key resource being money rather than quantity of data: With a budget constraint $B$, we give robust risk (predictive error) bounds on the order of $1/\sqrt{B}$.
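
To make the scaling concrete (my arithmetic, not a figure from the paper): a bound of order $1/\sqrt{B}$ means quadrupling the budget halves the guaranteed risk, since $1/\sqrt{4B} = \tfrac{1}{2} \cdot 1/\sqrt{B}$, directly mirroring the $1/\sqrt{n}$ behavior of classic sample-complexity bounds with $n$ data points.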

$\ell_p$ Testing and Learning of Discrete Distributions

no code implementations 7 Dec 2014 Bo Waggoner

For $p > 1$, we can learn and test with a number of samples that is independent of the support size of the distribution: With an $\ell_p$ tolerance $\epsilon$, $O(\max\{ \sqrt{1/\epsilon^q}, 1/\epsilon^2 \})$ samples suffice for testing uniformity and $O(\max\{ 1/\epsilon^q, 1/\epsilon^2\})$ samples suffice for learning, where $q=p/(p-1)$ is the conjugate of $p$.

Fairness
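
The sample bounds quoted above are easy to instantiate; a small helper (mine, simply evaluating the displayed expressions up to constant factors):

    import math

    def lp_sample_bounds(p, eps):
        # Evaluate the O(.) expressions above, ignoring constants, for p > 1.
        q = p / (p - 1)                                   # conjugate exponent of p
        testing = max(math.sqrt(1 / eps**q), 1 / eps**2)  # uniformity testing
        learning = max(1 / eps**q, 1 / eps**2)            # learning the distribution
        return testing, learning

    print(lp_sample_bounds(2.0, 0.1))   # roughly (100, 100) for the l_2 case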
