no code implementations • 14 Feb 2024 • Laura Niss, Kevin Vogt-Lowell, Theodoros Tsiligkaridis
Foundation models are presented as generalists that often perform well across a myriad of tasks.
no code implementations • 13 Apr 2022 • Laura Niss, Yuekai Sun, Ambuj Tewari
Sampling biases in training data are a major source of algorithmic biases in machine learning systems.
no code implementations • 12 Oct 2019 • Laura Niss, Ambuj Tewari
We define the $\varepsilon$-contaminated stochastic bandit problem and use our robust mean estimators to give two variants of a robust Upper Confidence Bound (UCB) algorithm, crUCB.
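The idea can be sketched as follows: replace the empirical mean inside a standard UCB index with a robust mean estimator, so that an $\varepsilon$-fraction of adversarially contaminated rewards cannot drag an arm's index arbitrarily far. The sketch below is a minimal illustration using a trimmed mean as the robust estimator; the function names, the specific confidence radius, and the contamination model are assumptions for illustration, not the paper's exact crUCB variants.

```python
import numpy as np

def trimmed_mean(samples, eps):
    """One simple robust mean estimator (assumed for illustration):
    discard the eps-fraction of largest and smallest samples, then average."""
    x = np.sort(np.asarray(samples, dtype=float))
    k = int(np.floor(eps * len(x)))
    if len(x) - 2 * k <= 0:  # too few samples to trim; fall back to plain mean
        return float(np.mean(x))
    return float(np.mean(x[k:len(x) - k]))

def robust_ucb(arms, horizon, eps, rng, contaminate=None):
    """UCB with the empirical mean swapped for a trimmed mean, as a hedge
    against an eps-fraction of contaminated rewards.

    arms: list of callables arm(rng) -> reward in [0, 1]
    contaminate: optional callable (arm_index, reward, rng) -> observed reward,
                 modeling the adversary's corruption (hypothetical interface)."""
    n_arms = len(arms)
    rewards = [[] for _ in range(n_arms)]

    def pull(a, t):
        r = arms[a](rng)
        if contaminate is not None:
            r = contaminate(a, r, rng)  # adversary may corrupt the observation
        rewards[a].append(r)

    for a in range(n_arms):          # pull each arm once to initialize
        pull(a, a)
    for t in range(n_arms, horizon):
        ucb = [trimmed_mean(rewards[a], eps)
               + np.sqrt(2 * np.log(t + 1) / len(rewards[a]))  # standard UCB radius
               for a in range(n_arms)]
        pull(int(np.argmax(ucb)), t)
    return rewards
```

The design point is that the confidence radius is unchanged; only the location estimate is robustified, which is what bounds the adversary's influence to roughly the trimming level $\varepsilon$ per arm.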
no code implementations • 3 Jul 2017 • Amanda Bower, Sarah N. Kitchen, Laura Niss, Martin J. Strauss, Alexander Vargas, Suresh Venkatasubramanian
This work facilitates ensuring fairness of machine learning in the real world by decoupling fairness considerations in compound decisions.