no code implementations • 5 Jun 2024 • Saba Ahmadi, Siddharth Bhandari, Avrim Blum, Chen Dan, Prabhav Jain
A major challenge in defending against adversarial attacks is the enormous space of possible attacks that even a simple adversary might perform.
1 code implementation • 24 May 2023 • Saba Ahmadi, Aishwarya Agrawal
Furthermore, we found that all metrics are sensitive to variations in the size of image-relevant objects mentioned in the caption, while CLIPScore and PAC-S are also sensitive to the number of mentions of image-relevant objects in the caption.
no code implementations • 15 Mar 2023 • Saba Ahmadi, Avrim Blum, Omar Montasser, Kevin Stangl
A fundamental problem in robust learning is asymmetry: a learner needs to correctly classify every one of the exponentially many perturbations that an adversary might make to a test-time natural example.
no code implementations • 23 Feb 2023 • Saba Ahmadi, Avrim Blum, Kunhe Yang
For instance, whereas in the non-strategic case, a mistake bound of $\ln|H|$ is achievable via the halving algorithm when the target function belongs to a known class $H$, we show that no deterministic algorithm can achieve a mistake bound $o(\Delta)$ in the strategic setting, where $\Delta$ is the maximum degree of the manipulation graph (even when $|H|=O(\Delta)$).
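The halving algorithm referenced above can be sketched as follows (the threshold hypothesis class and the example stream are illustrative, not from the paper): maintain the version space of hypotheses consistent with the labels seen so far, predict by majority vote, and note that every mistake eliminates at least half of the version space, giving the $\ln|H|$-style mistake bound in the non-strategic setting.

```python
def halving_predict(version_space, x):
    # Majority vote of the surviving hypotheses.
    votes = sum(h(x) for h in version_space)
    return 1 if 2 * votes >= len(version_space) else 0

def halving_update(version_space, x, y):
    # Keep only hypotheses that agree with the revealed label;
    # after a mistake, at least half of them are discarded.
    return [h for h in version_space if h(x) == y]

# Illustrative class H: threshold functions on the integers 0..7.
H = [lambda x, t=t: int(x >= t) for t in range(8)]
target = H[5]

vs = list(H)
mistakes = 0
for x in [0, 7, 3, 5, 4, 6, 2, 1]:
    pred = halving_predict(vs, x)
    y = target(x)
    if pred != y:
        mistakes += 1
    vs = halving_update(vs, x, y)
# mistakes is at most log2(|H|) = 3 for this class.
```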
1 code implementation • 13 Oct 2022 • Oscar Mañas, Pau Rodriguez, Saba Ahmadi, Aida Nematzadeh, Yash Goyal, Aishwarya Agrawal
Large pre-trained models have proved to be remarkable zero- and (prompt-based) few-shot learners in unimodal vision and language tasks.
1 code implementation • 7 Jul 2022 • Saba Ahmadi, Pranjal Awasthi, Samir Khuller, Matthäus Kleindessner, Jamie Morgenstern, Pattara Sukprasert, Ali Vakilian
In this paper, we propose a natural notion of individual preference (IP) stability for clustering, which asks that every data point, on average, is closer to the points in its own cluster than to the points in any other cluster.
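The IP stability notion defined above can be expressed as a direct check over a given clustering (the function name and Euclidean-distance setting are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def is_ip_stable(points, labels):
    """Check IP stability: every point's average distance to its own
    cluster (excluding itself) is at most its average distance to the
    points of any other cluster. Illustrative sketch only."""
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    for i in range(len(points)):
        dists = np.linalg.norm(points - points[i], axis=1)
        own_mask = labels == labels[i]
        own_mask = own_mask.copy()
        own_mask[i] = False              # exclude the point itself
        if own_mask.sum() == 0:          # singleton cluster: trivially stable
            continue
        own_avg = dists[own_mask].mean()
        for c in set(labels.tolist()) - {int(labels[i])}:
            if own_avg > dists[labels == c].mean():
                return False
    return True
```

For example, two well-separated pairs clustered together are IP-stable, while splitting each pair across clusters is not.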
no code implementations • 28 Feb 2022 • Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, Keziah Naggita
For the general discrete model, we give an efficient algorithm for the problem of maximizing the number of true positives subject to no false positives, and show how to extend this to a partial-information learning setting.
no code implementations • 28 Feb 2022 • Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, Keziah Naggita
A key technical challenge of this problem is the non-monotonicity of social welfare in the set of target levels, i.e., adding a new target level may decrease the total amount of improvement, as it may become easier for some agents to improve.
no code implementations • 4 Aug 2020 • Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, Keziah Naggita
The classical Perceptron algorithm provides a simple and elegant procedure for learning a linear classifier.
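That classical procedure is compact enough to state in full: on each mistake, add $y_i x_i$ to the weight vector. A minimal sketch (the data and variable names are illustrative):

```python
import numpy as np

def perceptron(X, y, epochs=100):
    """Classical Perceptron. Labels are in {-1, +1}; a bias term can be
    handled by appending a constant-1 feature to each example."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:   # mistake (or on the boundary)
                w += yi * xi
                mistakes += 1
        if mistakes == 0:            # converged: all points classified correctly
            break
    return w

# Tiny linearly separable example.
X = np.array([[1., 2.], [2., 3.], [-1., -2.], [-2., -1.]])
y = np.array([1, 1, -1, -1])
w = perceptron(X, y)
```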
no code implementations • 10 Feb 2020 • Saba Ahmadi, Sainyam Galhotra, Barna Saha, Roy Schwartz
We consider two variants of a fairness constraint for the problem of correlation clustering where each vertex has a color, and the goal is to form clusters that do not over-represent vertices of any color.
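One concrete way to express a "no over-represented color" constraint is a per-cluster fraction cap; the cap `alpha` and all names below are illustrative assumptions, not the paper's exact formulation:

```python
from collections import Counter

def no_color_overrepresented(clusters, colors, alpha):
    """Check that in every cluster, each color accounts for at most an
    alpha fraction of the cluster's vertices. Illustrative sketch.
    clusters: list of vertex lists; colors: dict vertex -> color."""
    for cluster in clusters:
        counts = Counter(colors[v] for v in cluster)
        if any(c > alpha * len(cluster) for c in counts.values()):
            return False
    return True
```

For instance, a 4-vertex cluster with two red and two blue vertices satisfies a cap of 0.5, while three red vertices would violate it.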
no code implementations • 7 Sep 2019 • Saba Ahmadi, Faez Ahmed, John P. Dickerson, Mark Fuge, Samir Khuller
Bipartite b-matching, where agents on one side of a market are matched to one or more agents or items on the other, is a classical model that is used in myriad application areas such as healthcare, advertising, education, and general resource allocation.
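To make the model concrete, here is a toy greedy sketch of weighted bipartite b-matching with per-agent capacities. Greedy is illustrative only; it is not the paper's method and is not optimal in general:

```python
def greedy_b_matching(edges, b_left, b_right):
    """Greedy b-matching sketch: process edges by descending weight,
    adding an edge whenever both endpoints still have spare capacity.
    edges: list of (u, v, weight); b_left/b_right: capacity dicts."""
    matched = []
    load_u, load_v = {}, {}
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if load_u.get(u, 0) < b_left[u] and load_v.get(v, 0) < b_right[v]:
            matched.append((u, v, w))
            load_u[u] = load_u.get(u, 0) + 1
            load_v[v] = load_v.get(v, 0) + 1
    return matched
```

With unit capacities this reduces to greedy bipartite matching; raising a capacity `b_left[u]` lets agent `u` be matched to multiple items, which is the "one or more" aspect of the model.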