no code implementations • 9 Feb 2022 • Tom F. Sterkenburg, Peter D. Grünwald
The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification.
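A hedged illustration of the no-free-lunch phenomenon the paper responds to (a standard textbook-style demonstration, not the paper's own construction): averaged uniformly over all Boolean target functions on a small finite domain, any fixed learner's off-training-set accuracy is exactly 1/2.

```python
# Minimal no-free-lunch sketch (illustrative; not from the paper).
# Averaged uniformly over all 16 Boolean targets on a 4-point domain,
# any learner's off-training-set accuracy comes out to exactly 1/2.
from itertools import product

train = [0, 1]  # points whose labels the learner observes
test = [2, 3]   # off-training-set points

def learner(train_labels, x):
    """Any fixed rule works; here: predict the majority training label."""
    return int(sum(train_labels) * 2 >= len(train_labels))

accuracies = []
for target in product([0, 1], repeat=4):  # all 16 target functions
    train_labels = [target[x] for x in train]
    hits = sum(learner(train_labels, x) == target[x] for x in test)
    accuracies.append(hits / len(test))

print(sum(accuracies) / len(accuracies))  # -> 0.5, regardless of the learner
```

Swapping in any other deterministic `learner` leaves the uniform average at 0.5, which is the skeptical observation the paper engages with.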
no code implementations • 21 Oct 2017 • Peter D. Grünwald, Nishant A. Mehta
Our first main result bounds the excess risk in terms of this new complexity measure.
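For orientation, the excess risk being bounded is the usual comparison to the best predictor in the class (notation is ours, not quoted from the paper):

$$\mathcal{E}(\hat f) \;=\; \mathbb{E}_{Z \sim P}\big[\ell_{\hat f}(Z)\big] \;-\; \inf_{f \in \mathcal{F}} \mathbb{E}_{Z \sim P}\big[\ell_f(Z)\big].$$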
no code implementations • 1 May 2016 • Peter D. Grünwald, Nishant A. Mehta
For general loss functions, our bounds rely on two separate conditions: the $v$-GRIP (generalized reversed information projection) conditions, which control the lower tail of the excess loss; and the newly introduced witness condition, which controls the upper tail.
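As a sketch of the flavor of the witness condition (constants and notation are illustrative assumptions, not quoted from the abstract): it asks that a constant fraction of each predictor's expected excess loss is already witnessed below a fixed threshold, so the upper tail cannot carry all of it:

$$\mathbb{E}\big[(\ell_f - \ell_{f^*})\,\mathbf{1}\{\ell_f - \ell_{f^*} \le u\}\big] \;\ge\; c\,\mathbb{E}\big[\ell_f - \ell_{f^*}\big] \quad \text{for all } f \in \mathcal{F},$$

for some $c \in (0,1]$ and $u > 0$, where $f^*$ denotes the risk minimizer.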
no code implementations • 9 Jul 2015 • Tim van Erven, Peter D. Grünwald, Nishant A. Mehta, Mark D. Reid, Robert C. Williamson
For bounded losses, we show how the central condition enables a direct proof of fast rates. We also prove its equivalence to the Bernstein condition, itself a generalization of the Tsybakov margin condition; both conditions have played a central role in obtaining fast rates in statistical learning.
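For reference, the two conditions are commonly stated as follows (a sketch in assumed notation, with loss $\ell_f$ and risk minimizer $f^*$). The $\eta$-central condition:

$$\mathbb{E}_{Z \sim P}\Big[e^{\eta\,(\ell_{f^*}(Z) - \ell_f(Z))}\Big] \;\le\; 1 \quad \text{for all } f \in \mathcal{F};$$

and the Bernstein condition with exponent $\beta \in (0,1]$ and constant $B$:

$$\mathbb{E}\big[(\ell_f - \ell_{f^*})^2\big] \;\le\; B\,\big(\mathbb{E}[\ell_f - \ell_{f^*}]\big)^{\beta} \quad \text{for all } f \in \mathcal{F},$$

with $\beta = 1$ yielding the fastest rates.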