no code implementations • 16 Feb 2024 • Ira Globus-Harris, Declan Harrison, Michael Kearns, Pietro Perona, Aaron Roth
There, unlike in classical crowdsourced ML, participants deliberately specialize their efforts by working on subproblems, such as demographic subgroups, in the service of fairness.
1 code implementation • 31 Jan 2023 • Ira Globus-Harris, Declan Harrison, Michael Kearns, Aaron Roth, Jessica Sorrell
Using this characterization, we give an exceedingly simple algorithm that can be analyzed both as a boosting algorithm for regression and as a multicalibration algorithm for a class H, and that makes use only of a standard squared-error regression oracle for H. We give a weak learning assumption on H that ensures convergence to Bayes optimality without the need for any realizability assumptions, yielding an agnostic boosting algorithm for regression.
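The abstract describes an iterative scheme built entirely on a squared-error regression oracle for H. The paper's actual algorithm is not reproduced here, but a minimal sketch in that spirit is residual fitting (gradient boosting under squared loss), where `stump_oracle` is a hypothetical oracle over axis-aligned threshold stumps standing in for the oracle over H:

```python
import numpy as np

def stump_oracle(X, r):
    """Hypothetical squared-error regression oracle: returns the axis-aligned
    threshold stump minimizing squared error against targets r."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            mask = X[:, j] <= t
            if mask.all() or (~mask).all():
                continue  # degenerate split: all points on one side
            a, b = r[mask].mean(), r[~mask].mean()
            err = np.mean((np.where(mask, a, b) - r) ** 2)
            if err < best_err:
                best_err, best = err, (j, t, a, b)
    j, t, a, b = best
    return lambda Z: np.where(Z[:, j] <= t, a, b)

def boost_regression(X, y, rounds=50, eta=0.5):
    """Residual-fitting boosting: each round, call the oracle on the current
    residuals and take a small step toward its prediction."""
    hs, f = [], np.zeros(len(y))
    for _ in range(rounds):
        h = stump_oracle(X, y - f)
        hs.append(h)
        f = f + eta * h(X)
    return lambda Z: eta * sum(h(Z) for h in hs)
```

Each round calls the oracle once, so the per-round cost is exactly one regression over H; how such updates also yield multicalibration guarantees is the subject of the paper.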