no code implementations • NeurIPS 2016 • Finnian Lattimore, Tor Lattimore, Mark D. Reid
We study the problem of using causal models to improve the rate at which good interventions can be learned online in a stochastic environment.
no code implementations • 9 Feb 2016 • Nicolás Della Penna, Mark D. Reid, David Balduzzi
Motivated by clinical trials, we study bandits with observable non-compliance.
no code implementations • NeurIPS 2015 • Rafael Frongillo, Mark D. Reid
However, little is known about rates and guarantees for the convergence of these sequential mechanisms, and two recent papers cite this as an important open question. In this paper we show how some previously studied prediction market trading models can be understood as a natural generalization of randomized coordinate descent which we call randomized subspace descent (RSD).
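As a rough illustration of the coordinate-descent view, here is a minimal sketch of plain randomized coordinate descent (the simpler special case, not the paper's randomized subspace descent); the objective, step size, and iteration count are illustrative assumptions:

```python
import numpy as np

def randomized_coordinate_descent(grad, x0, step=0.1, iters=1000, seed=0):
    """Minimize a smooth convex function by updating one uniformly
    sampled coordinate per step. `grad` returns the full gradient;
    only the sampled coordinate is used each iteration."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        i = rng.integers(len(x))       # sample a coordinate uniformly
        x[i] -= step * grad(x)[i]      # descend along that coordinate only
    return x

# Toy objective: f(x) = ||x - c||^2 / 2, with gradient x - c.
c = np.array([1.0, -2.0, 3.0])
x = randomized_coordinate_descent(lambda x: x - c, np.zeros(3))
```

Randomized subspace descent generalizes this by sampling a whole subspace (a block of directions) rather than a single coordinate at each step.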
no code implementations • 9 Jul 2015 • Tim van Erven, Peter D. Grünwald, Nishant A. Mehta, Mark D. Reid, Robert C. Williamson
For bounded losses, we show how the central condition enables a direct proof of fast rates and we prove its equivalence to the Bernstein condition, itself a generalization of the Tsybakov margin condition, both of which have played a central role in obtaining fast rates in statistical learning.
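For reference, the Bernstein condition mentioned here has a standard statement in the fast-rates literature (a sketch of the usual formulation, not necessarily this paper's exact one; here F is the hypothesis class, ℓ the loss, and f* the risk minimizer):

```latex
% Bernstein condition with exponent \beta \in (0,1] and constant B > 0:
% the variance of the excess loss is controlled by its mean.
\mathbb{E}\big[(\ell_f - \ell_{f^*})^2\big]
  \;\le\; B \,\big(\mathbb{E}[\ell_f - \ell_{f^*}]\big)^{\beta}
  \qquad \text{for all } f \in \mathcal{F}.
```

The case β = 1 corresponds to the fastest rates; the Tsybakov margin condition is recovered as a special case for classification.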
no code implementations • 1 Oct 2014 • Rafael M. Frongillo, Mark D. Reid
We introduce a new framework to model interactions among agents which seek to trade to minimize their risk with respect to some future outcome.
no code implementations • 24 Jun 2014 • Mark D. Reid, Rafael M. Frongillo, Robert C. Williamson, Nishant Mehta
Mixability is a property of a loss which characterizes when fast convergence is possible in the game of prediction with expert advice.
no code implementations • 10 Mar 2014 • Mark D. Reid, Rafael M. Frongillo, Robert C. Williamson
Mixability of a loss is known to characterise when constant regret bounds are achievable in games of prediction with expert advice through the use of Vovk's aggregating algorithm.
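For concreteness, a minimal sketch of Vovk's aggregating algorithm specialized to log loss, which is 1-mixable (the uniform prior and the mixture prediction rule are standard textbook choices, not details taken from this paper):

```python
import numpy as np

def aggregating_algorithm(expert_preds, outcomes, eta=1.0):
    """Vovk's Aggregating Algorithm for binary log loss. With eta = 1
    (log loss is 1-mixable) the learner's regret to the best expert
    is at most ln(n), independent of the horizon T.

    expert_preds: (T, n) array, each expert's probability of outcome 1.
    outcomes:     length-T array of 0/1 outcomes.
    Returns the learner's predictions and the final expert weights."""
    T, n = expert_preds.shape
    w = np.full(n, 1.0 / n)                 # uniform prior over experts
    learner = np.empty(T)
    for t in range(T):
        p = expert_preds[t]
        learner[t] = w @ p                  # for log loss, predict the mixture
        # exponential-weights update: w_i proportional to w_i * exp(-eta * loss_i)
        loss = -np.log(np.where(outcomes[t] == 1, p, 1 - p))
        w = w * np.exp(-eta * loss)
        w /= w.sum()
    return learner, w

preds = np.tile([0.9, 0.1], (5, 1))         # expert 0 is reliable, expert 1 is not
learner, w = aggregating_algorithm(preds, np.ones(5, dtype=int))
```

After a few rounds the weight concentrates on the reliable expert, which is what drives the constant (horizon-independent) regret bound for mixable losses.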
no code implementations • NeurIPS 2012 • Tim van Erven, Peter Grünwald, Mark D. Reid, Robert C. Williamson

We show that, in the special case of log-loss, stochastic mixability reduces to a well-known (but usually unnamed) martingale condition, which is used in existing convergence theorems for minimum description length and Bayesian inference.
no code implementations • NeurIPS 2012 • Rafael M. Frongillo, Nicolás Della Penna, Mark D. Reid
We strengthen recent connections between prediction markets and learning by showing that a natural class of market makers can be understood as performing stochastic mirror descent when trader demands are sequentially drawn from a fixed distribution.
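To make the mirror-descent connection concrete, here is a minimal sketch of stochastic mirror descent on the probability simplex with the negative-entropy mirror map (the exponentiated-gradient update); this is a generic sketch, not the paper's market-maker construction, and the objective is an illustrative assumption:

```python
import numpy as np

def stochastic_mirror_descent(grad_sample, theta0, eta=0.1, iters=2000, seed=0):
    """Stochastic mirror descent on the simplex with the negative-entropy
    mirror map: theta_i is scaled by exp(-eta * g_i) and renormalized.
    `grad_sample(theta, rng)` returns a stochastic gradient drawn from
    a fixed distribution, mirroring i.i.d. trader demands."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(iters):
        g = grad_sample(theta, rng)
        theta = theta * np.exp(-eta * g)   # mirror step in the dual space
        theta /= theta.sum()               # back onto the probability simplex
    return theta

# Toy example: minimize an expected linear loss <c + noise, theta>;
# the iterates concentrate on the coordinate with the smallest cost.
c = np.array([0.3, 0.1, 0.5])
theta = stochastic_mirror_descent(
    lambda th, rng: c + 0.01 * rng.standard_normal(3), np.full(3, 1 / 3))
```

The negative-entropy choice is what makes the state a probability vector throughout, matching the interpretation of market prices as a distribution over outcomes.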
no code implementations • NeurIPS 2011 • Elodie Vernet, Mark D. Reid, Robert C. Williamson
We also show that the integral representation for binary proper losses cannot be extended to multiclass losses.
no code implementations • 1 Dec 2011 • Nicolás Della Penna, Mark D. Reid
We introduce a modular framework for market making.