no code implementations • 13 Jun 2023 • Omar Montasser
In this thesis, we explore what robustness properties we can hope to guarantee against adversarial examples, and develop an understanding of how to guarantee them algorithmically.
no code implementations • NeurIPS 2023 • Han Shao, Avrim Blum, Omar Montasser
Ball manipulations are a widely studied class of manipulations in the literature, where agents can modify their feature vector within a bounded radius ball.
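To make the definition concrete, here is a minimal sketch of a ball manipulation against a hypothetical linear classifier; the helper names, the l2 choice of ball, and the best-response strategy are illustrative assumptions, not the paper's construction:

```python
import math

def within_ball(x, x_prime, r):
    """Check that a manipulated feature vector stays inside the
    l2 ball of radius r around the original vector x."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x_prime)))
    return dist <= r

def best_response(x, w, b, r):
    """An agent's best response against a linear classifier
    sign(<w, x> + b): move distance r along w, which maximizes
    the score achievable within the ball."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    return [xi + r * wi / norm for xi, wi in zip(x, w)]

# An agent at x = (0, 0), radius 1, facing w = (1, 0), b = -0.5:
# the original score is -0.5 (negative), but moving to (1, 0)
# inside the ball yields score 0.5 (positive).
x_new = best_response([0.0, 0.0], [1.0, 0.0], -0.5, 1.0)
```

The point of the bounded radius is exactly this: the agent can flip a classifier's decision only when the decision boundary passes within distance r of its true features.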
no code implementations • 15 Mar 2023 • Saba Ahmadi, Avrim Blum, Omar Montasser, Kevin Stangl
A fundamental problem in robust learning is asymmetry: a learner needs to correctly classify every one of the exponentially many perturbations that an adversary might make to a test-time natural example.
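This asymmetry can be seen directly in the robust zero-one loss: the learner pays for a point if any allowed perturbation of it is misclassified. A minimal sketch, with an assumed finite perturbation set and a hypothetical 1-d threshold classifier:

```python
def robust_error(h, x, y, perturbations):
    """Return 1 if ANY allowed perturbed copy of x is misclassified
    by hypothesis h. The asymmetry: the learner must classify every
    perturbation correctly, while the adversary needs only one mistake."""
    return int(any(h(xp) != y for xp in perturbations))

# A 1-d threshold classifier and l_inf-style perturbations of x.
h = lambda z: 1 if z >= 0 else -1
x, y, eps = 0.05, 1, 0.1
perturbed = [x - eps, x, x + eps]

standard = int(h(x) != y)                 # 0: the natural point is correct
robust = robust_error(h, x, y, perturbed)  # 1: x - eps = -0.05 is flipped
```

The gap between `standard` and `robust` here is the whole difficulty: a predictor can be accurate on natural examples while failing robustly everywhere near its decision boundary.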
no code implementations • 15 Sep 2022 • Omar Montasser, Steve Hanneke, Nathan Srebro
We present a minimax optimal learner for the problem of learning predictors robust to adversarial examples at test-time.
no code implementations • 15 Feb 2022 • Han Shao, Omar Montasser, Avrim Blum
One interesting observation is that distinguishing between the original data and the transformed data is necessary to achieve optimal accuracy in settings (ii) and (iii); this implies that any algorithm that does not differentiate between the original and transformed data (including data augmentation) is not optimal.
no code implementations • 11 Feb 2022 • Avrim Blum, Omar Montasser, Greg Shakhnarovich, Hongyang Zhang
We present an oracle-efficient algorithm for boosting the adversarial robustness of barely robust learners.
no code implementations • 20 Oct 2021 • Omar Montasser, Steve Hanneke, Nathan Srebro
We study the problem of adversarially robust learning in the transductive setting.
no code implementations • 3 Feb 2021 • Omar Montasser, Steve Hanneke, Nathan Srebro
We study the problem of learning predictors that are robust to adversarial examples with respect to an unknown perturbation set, relying instead on interaction with an adversarial attacker or access to attack oracles, examining different models for such interactions.
no code implementations • NeurIPS 2020 • Omar Montasser, Steve Hanneke, Nathan Srebro
We study the problem of reducing adversarially robust learning to standard PAC learning, i.e., the complexity of learning adversarially robust predictors using access to only a black-box non-robust learner.
no code implementations • NeurIPS 2020 • Shafi Goldwasser, Adam Tauman Kalai, Yael Tauman Kalai, Omar Montasser
We present a transductive learning algorithm that takes as input training examples from a distribution $P$ and arbitrary (unlabeled) test examples, possibly chosen by an adversary.
no code implementations • ICML 2020 • Omar Montasser, Surbhi Goel, Ilias Diakonikolas, Nathan Srebro
We study the problem of learning adversarially robust halfspaces in the distribution-independent setting.
no code implementations • 9 Mar 2020 • Pritish Kamath, Omar Montasser, Nathan Srebro
We present and study approximate notions of dimensional and margin complexity, which correspond to the minimal dimension or norm of an embedding required to approximate, rather than exactly represent, a given hypothesis class.
no code implementations • 12 Feb 2019 • Omar Montasser, Steve Hanneke, Nathan Srebro
We study the question of learning an adversarially robust predictor.
no code implementations • 22 Jan 2017 • Omar Montasser, Daniel Kifer
For the task of predicting gender and race/ethnicity counts at the blockgroup level, an approach adapted from prior work to our problem achieves an average correlation of 0.389 (gender) and 0.569 (race) on a held-out test dataset.
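The reported numbers are average correlations between predicted and true counts on held-out data. A minimal stdlib sketch of that style of evaluation (Pearson correlation, averaged over categories); the data and helper names are illustrative, not the paper's pipeline:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

def avg_correlation(preds_by_cat, truth_by_cat):
    """Average the per-category correlations, e.g. one correlation
    per demographic category, over held-out blockgroups."""
    corrs = [pearson(p, t) for p, t in zip(preds_by_cat, truth_by_cat)]
    return sum(corrs) / len(corrs)

# Perfectly linear predictions give correlation 1.0 per category.
score = avg_correlation([[1, 2, 3], [4, 5, 6]],
                        [[2, 4, 6], [8, 10, 12]])
```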