no code implementations • 14 Feb 2024 • Idan Attias, Gintare Karolina Dziugaite, Mahdi Haghifam, Roi Livni, Daniel M. Roy
In this work, we investigate the interplay between memorization and learning in the context of stochastic convex optimization (SCO).
no code implementations • 4 Jul 2023 • Angelos Assos, Idan Attias, Yuval Dagan, Constantinos Daskalakis, Maxwell Fishelson
In this setting, we provide learning algorithms that only rely on best response oracles and converge to approximate-minimax equilibria in two-player zero-sum games and approximate coarse correlated equilibria in multi-player general-sum games, as long as the game has a bounded fat-threshold dimension.
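As an illustration of equilibrium computation driven purely by best-response oracles, here is a minimal fictitious-play sketch for a finite two-player zero-sum matrix game: each player repeatedly best-responds to the opponent's empirical mixture of past play, and the time-averaged strategies approximate the minimax equilibrium. The setup and names are illustrative assumptions; the paper's algorithms handle far more general games with bounded fat-threshold dimension and are not this procedure.

```python
import numpy as np

def best_response_row(A, q):
    # Row player's best response (pure strategy) to the column mixture q;
    # the row player maximizes the expected payoff A @ q.
    return int(np.argmax(A @ q))

def best_response_col(A, p):
    # Column player's best response to the row mixture p; the column
    # player minimizes the row player's expected payoff p @ A.
    return int(np.argmin(p @ A))

def fictitious_play(A, T=5000, rng=None):
    """Fictitious play on the zero-sum matrix game A: each round, both
    players best-respond to the opponent's empirical (average) strategy."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    row_counts, col_counts = np.zeros(m), np.zeros(n)
    row_counts[rng.integers(m)] += 1   # arbitrary initial pure strategies
    col_counts[rng.integers(n)] += 1
    for _ in range(T):
        p = row_counts / row_counts.sum()
        q = col_counts / col_counts.sum()
        row_counts[best_response_row(A, q)] += 1
        col_counts[best_response_col(A, p)] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

if __name__ == "__main__":
    # Rock-paper-scissors: value 0, unique equilibrium (1/3, 1/3, 1/3).
    A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
    p, q = fictitious_play(A, T=20000, rng=0)
    print("row strategy", np.round(p, 3), "col strategy", np.round(q, 3))
    print("game value estimate", round(float(p @ A @ q), 3))
```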
no code implementations • 26 Jun 2022 • Idan Attias, Steve Hanneke
We study robustness to test-time adversarial attacks in the regression setting with $\ell_p$ losses and arbitrary perturbation sets.
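For intuition, a toy sketch of the robust-loss objective in this setting: the worst-case $\ell_p$ regression loss of a predictor over a perturbation set. The helper name, the one-dimensional predictor, and the discretized perturbation set are illustrative assumptions, not the paper's construction, which allows arbitrary perturbation sets.

```python
import numpy as np

def robust_lp_loss(f, x, y, perturbations, p=2):
    """Worst-case l_p regression loss of predictor f at (x, y) over a finite
    list of candidate perturbations: sup_{delta} |f(x + delta) - y|^p."""
    return max(abs(f(x + delta) - y) ** p for delta in perturbations)

if __name__ == "__main__":
    f = lambda x: 2.0 * x          # toy linear predictor
    x, y = 1.0, 2.5
    # l_inf ball of radius 0.1 around x, discretized to a grid.
    U = np.linspace(-0.1, 0.1, 21)
    print("standard loss:", abs(f(x) - y) ** 2)
    print("robust loss:  ", robust_lp_loss(f, x, y, U, p=2))
```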
no code implementations • 12 Feb 2022 • Eitan-Hai Mashiah, Idan Attias, Yishay Mansour
Following this, we show how to compute both the optimal pure strategy and the optimal mixed strategy.
no code implementations • 11 Feb 2022 • Idan Attias, Steve Hanneke, Yishay Mansour
This shows that there is a significant benefit to semi-supervised robust learning even in the worst-case, distribution-free model, and establishes a gap between the supervised and semi-supervised label complexities that is known not to exist in standard (non-robust) PAC learning.
no code implementations • 10 Oct 2021 • Idan Attias, Aryeh Kontorovich
We provide estimates on the fat-shattering dimension of aggregation rules of real-valued function classes.
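For reference, the standard definition of the quantity being estimated, the fat-shattering dimension of a real-valued class $\mathcal{F}$ at scale $\gamma > 0$: a set $\{x_1,\dots,x_n\}$ is $\gamma$-shattered by $\mathcal{F}$ if there exist witnesses $r_1,\dots,r_n \in \mathbb{R}$ such that for every sign pattern $(b_1,\dots,b_n) \in \{-1,+1\}^n$ there is some $f \in \mathcal{F}$ with
$$ b_i\,\bigl(f(x_i) - r_i\bigr) \;\ge\; \gamma \qquad \text{for all } i \in \{1,\dots,n\}. $$
The fat-shattering dimension $\mathrm{fat}_\gamma(\mathcal{F})$ is the largest $n$ for which such a set exists.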
no code implementations • 30 Jul 2021 • Idan Attias, Edith Cohen, Moshe Shechner, Uri Stemmer
Classical streaming algorithms operate under the (not always reasonable) assumption that the input stream is fixed in advance.
1 code implementation • 1 Apr 2021 • Matan Levi, Idan Attias, Aryeh Kontorovich
We present a new adversarial training method, Domain Invariant Adversarial Learning (DIAL), which learns a feature representation that is both robust and domain invariant.
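A minimal PyTorch-style sketch of the general idea, under the assumption of a DANN-style setup: adversarial examples are generated with PGD, and a domain head trained through a gradient-reversal layer pushes the feature representations of natural and adversarial inputs to be indistinguishable. The architecture, hyperparameters, and helper names (Net, pgd_attack, GradReverse) are illustrative, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity forward, negated gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def pgd_attack(model, x, y, eps=0.3, alpha=0.05, steps=10):
    """Standard l_inf PGD attack used to generate adversarial examples."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        logits, _ = model(x_adv)
        grad, = torch.autograd.grad(F.cross_entropy(logits, y), x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
    return x_adv.detach()

class Net(nn.Module):
    def __init__(self, d=20, k=3):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(d, 64), nn.ReLU())
        self.classifier = nn.Linear(64, k)    # task head
        self.domain_head = nn.Linear(64, 2)   # natural-vs-adversarial head
    def forward(self, x, lam=1.0):
        z = self.features(x)
        return self.classifier(z), self.domain_head(GradReverse.apply(z, lam))

def train_step(model, opt, x, y, lam=1.0):
    x_adv = pgd_attack(model, x, y)
    logits_nat, dom_nat = model(x, lam)
    logits_adv, dom_adv = model(x_adv, lam)
    # Domain labels: 0 = natural, 1 = adversarial.
    dom_labels = torch.cat([torch.zeros(len(x)), torch.ones(len(x))]).long()
    loss = (F.cross_entropy(logits_nat, y)          # natural task loss
            + F.cross_entropy(logits_adv, y)        # adversarial task loss
            + F.cross_entropy(torch.cat([dom_nat, dom_adv]), dom_labels))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = Net()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(32, 20), torch.randint(0, 3, (32,))
    for step in range(5):
        print("loss", round(train_step(model, opt, x, y), 3))
```

The gradient reversal means the domain head learns to tell natural from adversarial features while the shared feature extractor is trained to make them indistinguishable, which is the domain-invariance mechanism the description above refers to.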
no code implementations • NeurIPS 2020 • Idan Amir, Idan Attias, Tomer Koren, Roi Livni, Yishay Mansour
We revisit the fundamental problem of prediction with expert advice, in a setting where the environment is benign and generates losses stochastically, but the feedback observed by the learner is subject to a moderate adversarial corruption.
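To make the setting concrete, here is a small simulation sketch (an assumed setup, not the algorithms analyzed in the paper): expert losses are drawn stochastically, an adversary corrupts the feedback observed in some rounds, and the classical Hedge (multiplicative weights) update is run on the corrupted feedback while regret is measured against the true losses.

```python
import numpy as np

def hedge_with_corrupted_feedback(true_losses, corrupted_losses, eta=0.1):
    """Run the classical Hedge algorithm on corrupted feedback and measure
    regret against the true stochastic losses."""
    T, K = true_losses.shape
    w = np.ones(K)
    learner_loss = 0.0
    for t in range(T):
        p = w / w.sum()
        learner_loss += p @ true_losses[t]        # loss actually suffered
        w *= np.exp(-eta * corrupted_losses[t])   # update uses observed feedback
    best_expert = true_losses.sum(axis=0).min()
    return learner_loss - best_expert             # regret w.r.t. best expert

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, K, C = 2000, 5, 100
    means = rng.uniform(0.2, 0.8, size=K)
    true_losses = rng.binomial(1, means, size=(T, K)).astype(float)
    corrupted = true_losses.copy()
    # Adversary corrupts the feedback of the best expert in C early rounds.
    best = int(np.argmin(means))
    corrupted[:C, best] = 1.0
    print("regret with corrupted feedback:",
          round(hedge_with_corrupted_feedback(true_losses, corrupted), 2))
```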
no code implementations • 4 Oct 2018 • Idan Attias, Aryeh Kontorovich, Yishay Mansour
For binary classification, the algorithm of Feige et al. (2015) uses a regret minimization algorithm and an ERM oracle as a black box; we adapt it for the multiclass and regression settings.
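A schematic toy instantiation of that black-box pattern, under illustrative assumptions (one-dimensional threshold classifiers, finite per-example corruption menus, hypothetical helper names): the adversary runs multiplicative weights over each example's corruption menu while the learner best-responds through the ERM oracle, and the resulting ensemble of hypotheses approximates the robust solution. This is a sketch of the generic "no-regret adversary versus best-responding learner" scheme, not the paper's algorithm or analysis.

```python
import numpy as np

def erm_oracle(thresholds, xs, ys, weights):
    """Weighted ERM over the finite class of 1-D threshold classifiers
    h_t(x) = 1[x >= t]: returns the threshold with smallest weighted error."""
    errs = [np.sum(weights * ((xs >= t).astype(int) != ys)) for t in thresholds]
    return thresholds[int(np.argmin(errs))]

def robust_learn(xs, ys, corruptions, thresholds, T=200, eta=0.5):
    """Adversary: multiplicative weights over each example's corruption menu.
    Learner: ERM best response on the weighted corrupted sample each round."""
    n = len(xs)
    W = [np.ones(len(c)) for c in corruptions]
    hyps = []
    for _ in range(T):
        # Flatten the adversary's mixed strategy into a weighted corrupted sample.
        flat_x, flat_y, flat_w = [], [], []
        for i in range(n):
            p = W[i] / W[i].sum()
            flat_x.extend(corruptions[i])
            flat_y.extend([ys[i]] * len(p))
            flat_w.extend(p / n)
        h = erm_oracle(thresholds, np.array(flat_x), np.array(flat_y), np.array(flat_w))
        hyps.append(h)
        # Adversary upweights corruptions on which the learner's hypothesis errs.
        for i in range(n):
            losses = ((np.array(corruptions[i]) >= h).astype(int) != ys[i]).astype(float)
            W[i] *= np.exp(eta * losses)
    return hyps

if __name__ == "__main__":
    xs = np.array([0.1, 0.2, 0.8, 0.9]); ys = np.array([0, 0, 1, 1])
    corruptions = [[x - 0.15, x, x + 0.15] for x in xs]  # finite perturbation menus
    thresholds = np.linspace(0, 1, 21)
    hyps = robust_learn(xs, ys, corruptions, thresholds)
    print("median threshold of the learned ensemble:", np.median(hyps))
```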
no code implementations • 3 Oct 2018 • Idan Attias, Steve Hanneke, Aryeh Kontorovich, Menachem Sadigurschi
For the $\ell_2$ loss, does every function class admit an approximate compression scheme of polynomial size in the fat-shattering dimension?