no code implementations • 29 Feb 2024 • Lee Cohen, Yishay Mansour, Shay Moran, Han Shao
We essentially show that any learnable class is also strategically learnable: we first consider a fully informative setting, where the manipulation structure (modeled by a manipulation graph $G^\star$) is known and, during training, the learner has access to both the pre-manipulation and post-manipulation data.
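To make the manipulation-graph setting concrete, here is a minimal sketch of an agent's best response on a known graph $G^\star$: an agent may misreport its features only along the graph's edges, and does so only to gain a positive prediction. The dict-based graph representation and the `best_response` helper are illustrative assumptions, not the paper's construction.

```python
# A minimal sketch of a manipulation graph and an agent's best response.

def best_response(x, graph, h):
    """Return the feature vector the agent reports to classifier h.

    graph[x] lists the vertices x can manipulate to.
    The agent prefers a positive prediction and otherwise stays put.
    """
    if h(x) == 1:
        return x  # already classified positively; no need to manipulate
    for v in graph.get(x, []):
        if h(v) == 1:
            return v  # manipulate along an edge of G* to gain a positive label
    return x  # no profitable manipulation exists


# Toy example: vertices are integers, h accepts values >= 3.
G_star = {1: [2, 3], 2: [3], 3: []}
h = lambda v: 1 if v >= 3 else 0
print(best_response(1, G_star, h))  # -> 3 (agent 1 manipulates to vertex 3)
```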
no code implementations • 28 Nov 2023 • Minbiao Han, Kumar Kshitij Patel, Han Shao, Lingxiao Wang
Federated learning is a machine learning protocol that enables a large population of agents to collaborate over multiple rounds to produce a single consensus model.
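As a concrete illustration of such a multi-round protocol, here is a minimal federated-averaging-style sketch in which a server repeatedly averages locally updated models into one consensus model. The quadratic local losses, step size, and round count are illustrative assumptions.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=5):
    """A few local gradient steps on a least-squares objective."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
d, n_agents = 3, 4
data = [(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(n_agents)]

w = np.zeros(d)  # consensus model
for round_ in range(10):
    local_models = [local_update(w.copy(), X, y) for X, y in data]
    w = np.mean(local_models, axis=0)  # server averages into one consensus model
print(w)  # the consensus model after 10 rounds
```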
no code implementations • 1 Nov 2023 • Lee Cohen, Han Shao
In collaborative active learning, where multiple agents try to learn labels from a common hypothesis, we introduce an innovative framework for incentivized collaboration.
no code implementations • NeurIPS 2023 • Han Shao, Avrim Blum, Omar Montasser
Ball manipulations are a widely studied class of manipulations in the literature, where agents can modify their feature vector within a ball of bounded radius.
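For intuition, here is a minimal sketch of a ball manipulation against a linear classifier $h(x) = \mathrm{sign}(w \cdot x + b)$: the agent may report any point within $\ell_2$ distance $r$ of its true features and crosses the decision boundary only when it is within reach. The closed-form projection is standard geometry, not the paper's method.

```python
import numpy as np

def ball_best_response(x, w, b, r, eps=1e-6):
    score = w @ x + b
    if score >= 0:
        return x  # already positive; no manipulation needed
    dist = -score / np.linalg.norm(w)  # distance from x to the hyperplane
    if dist > r:
        return x  # the boundary is out of reach within the radius-r ball
    # move perpendicular to the boundary, just far enough to flip the label
    return x + (dist + eps) * w / np.linalg.norm(w)

x = np.array([0.0, 0.0])
w, b, r = np.array([1.0, 0.0]), -0.5, 1.0
print(ball_best_response(x, w, b, r))  # -> approx [0.5, 0.0]
```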
no code implementations • 15 Feb 2022 • Han Shao, Omar Montasser, Avrim Blum
One interesting observation is that distinguishing between the original and transformed data is necessary to achieve optimal accuracy in settings (ii) and (iii); this implies that any algorithm that does not differentiate between the original and transformed data (including data augmentation) cannot be optimal.
no code implementations • NeurIPS 2021 • Han Shao, Tassilo Kugelstadt, Torsten Hädrich, Wojtek Palubicki, Jan Bender, Soeren Pirk, Dominik Michels
In this contribution, we introduce a novel method to accelerate iterative solvers for rod dynamics with graph networks (GNs) by predicting the initial guesses to reduce the number of iterations.
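A minimal sketch of the warm-starting idea follows: a learned model maps problem data to an initial guess so the iterative solver needs fewer iterations. A plain least-squares predictor stands in for the graph network, and Jacobi iteration stands in for the rod-dynamics solver; both stand-ins are illustrative assumptions.

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=10_000):
    """Jacobi iteration for Ax = b; returns the solution and iteration count."""
    D = np.diag(A)
    R = A - np.diag(D)
    x, iters = x0.copy(), 0
    while np.linalg.norm(A @ x - b) > tol and iters < max_iter:
        x = (b - R @ x) / D
        iters += 1
    return x, iters

rng = np.random.default_rng(0)
n = 20
A = np.eye(n) * 4 + rng.normal(scale=0.1, size=(n, n))  # diagonally dominant

# "Train" the predictor: learn the linear map b -> x from solved examples.
B_train = rng.normal(size=(100, n))
X_train = np.linalg.solve(A, B_train.T).T
W, *_ = np.linalg.lstsq(B_train, X_train, rcond=None)

b = rng.normal(size=n)
_, cold = jacobi(A, b, np.zeros(n))  # cold start from zero
_, warm = jacobi(A, b, b @ W)        # predicted initial guess
print(cold, warm)  # the warm start converges in far fewer iterations
```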
1 code implementation • 4 Mar 2021 • Avrim Blum, Nika Haghtalab, Richard Lanas Phillips, Han Shao
In recent years, federated learning has been embraced as an approach for bringing about collaboration across large populations of learning agents.
no code implementations • 1 Mar 2021 • Avrim Blum, Steve Hanneke, Jian Qian, Han Shao
We study the problem of robust learning under clean-label data-poisoning attacks, where the attacker injects (an arbitrary set of) correctly-labeled examples into the training set to fool the algorithm into making mistakes on specific test instances.
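The following toy sketch illustrates why clean labels suffice for an attack: against a 1-nearest-neighbor learner, injecting a single correctly-labeled point near a chosen target flips the prediction on that target. The 2-D data and the true labeling rule (the sign of the first coordinate) are illustrative assumptions.

```python
import numpy as np

def true_label(x):
    return 1 if x[0] >= 0 else -1

def knn1_predict(train, x):
    """Predict with the label of the nearest training point."""
    pts, labels = zip(*train)
    i = np.argmin([np.linalg.norm(np.array(p) - x) for p in pts])
    return labels[i]

train = [((-2.0, 0.0), -1), ((2.0, 0.0), 1)]
target = np.array([0.2, 0.0])             # true label +1
print(knn1_predict(train, target))        # -> 1 (correct before the attack)

# Inject a correctly-labeled negative point: its first coordinate is
# negative, so the label is clean, yet it becomes the target's neighbor.
poison_pt = (-0.1, 0.0)
assert true_label(poison_pt) == -1        # the injected label is correct
train.append((poison_pt, -1))
print(knn1_predict(train, target))        # -> -1 (the target is now misclassified)
```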
no code implementations • NeurIPS 2020 • Avrim Blum, Han Shao
On the positive side, we show that running any switching-limited algorithm achieves this goal when every expert's secondary loss exceeds the linear threshold by at most $o(T)$ over any time interval.
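For intuition about switching-limited algorithms, here is a minimal sketch of one such strategy: a lazy follow-the-leader rule that switches experts only when the current expert trails the leader by a fixed margin, which caps the number of switches. This rule is an illustrative stand-in, not the paper's algorithm.

```python
import numpy as np

def lazy_ftl(losses, margin=2.0):
    """losses: (T, K) array of per-round expert losses; returns picks, switches."""
    T, K = losses.shape
    cum = np.zeros(K)
    current, switches, picks = 0, 0, []
    for t in range(T):
        leader = int(np.argmin(cum))
        if cum[current] - cum[leader] > margin:  # switch only past the margin
            current, switches = leader, switches + 1
        picks.append(current)
        cum += losses[t]
    return picks, switches

rng = np.random.default_rng(0)
losses = rng.uniform(size=(1000, 5))
_, n_switches = lazy_ftl(losses)
print(n_switches)  # few switches relative to T
```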
no code implementations • 15 Oct 2020 • Xuedong Shang, Han Shao, Jian Qian
We study two goals: (a) finding the arm with the minimum $\ell^\infty$-norm of relative losses with a given confidence level (which refers to fixed-confidence best-arm identification); (b) minimizing the $\ell^\infty$-norm of cumulative relative losses (which refers to regret minimization).
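As a small illustration of objective (a), assuming the relative loss of an arm on each criterion is its gap to the per-criterion best arm, the target arm minimizes the maximum ($\ell^\infty$) relative loss. The loss matrix below is an illustrative assumption.

```python
import numpy as np

losses = np.array([[0.2, 0.9],    # arm 0: good on criterion 0, bad on 1
                   [0.8, 0.3],    # arm 1: the reverse
                   [0.4, 0.5]])   # arm 2: balanced
relative = losses - losses.min(axis=0)   # gap to the best arm per criterion
linf = relative.max(axis=1)              # l-infinity norm of relative losses
print(linf, int(np.argmin(linf)))        # arm 2 minimizes the worst case
```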
no code implementations • ICML 2020 • Rémy Degenne, Han Shao, Wouter M. Koolen
We study reward maximisation in a wide class of structured stochastic multi-armed bandit problems, where the mean rewards of arms satisfy some given structural constraints, e.g. linear, unimodal, sparse, etc.
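As a small illustration of how such structure can be exploited, assume a linear structure $\mu_i = \theta^\top a_i$ with known arm features $a_i$: observations from all arms can then be pooled through a single parameter $\theta$. The features and empirical means below are illustrative assumptions.

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # arm feature vectors
emp = np.array([0.48, 0.31, 0.84])                   # empirical mean per arm
theta, *_ = np.linalg.lstsq(A, emp, rcond=None)      # fit the shared parameter
print(A @ theta)  # structure-consistent mean estimates for every arm
```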
no code implementations • 6 Jun 2020 • Han Shao, Tassilo Kugelstadt, Torsten Hädrich, Wojciech Pałubicki, Jan Bender, Sören Pirk, Dominik L. Michels
In this contribution, we introduce a novel method to accelerate iterative solvers for physical systems with graph networks (GNs) by predicting the initial guesses to reduce the number of iterations.
no code implementations • NeurIPS 2018 • Han Shao, Xiaotian Yu, Irwin King, Michael R. Lyu
In this paper, under a weaker assumption on the noise, we study the problem of linear stochastic bandits with heavy-tailed payoffs (LinBET), where the payoff distributions have finite moments of order $1+\epsilon$ for some $\epsilon \in (0, 1]$.
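One standard tool in this regime is truncation: clipping payoffs before averaging keeps the mean estimator well behaved when only the $(1+\epsilon)$-th moment is finite. The sketch below is illustrative; the Pareto rewards and the truncation-level schedule are assumptions, and the paper combines such estimators with linear-bandit machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.5
# Heavy-tailed samples: Pareto with shape 1.5 has finite moments
# only up to (roughly) order 1 + eps.
rewards = rng.pareto(1 + eps, size=10_000)

def truncated_mean(x, b):
    """Average after zeroing payoffs whose magnitude exceeds threshold b."""
    return np.mean(np.where(np.abs(x) <= b, x, 0.0))

b = len(rewards) ** (1.0 / (1.0 + eps))  # a common truncation-level schedule
print(np.mean(rewards), truncated_mean(rewards, b))
```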