Search Results for author: Han Shao

Found 13 papers, 1 paper with code

Learnability Gaps of Strategic Classification

no code implementations • 29 Feb 2024 • Lee Cohen, Yishay Mansour, Shay Moran, Han Shao

We essentially show that any learnable class is also strategically learnable. We first consider a fully informative setting, where the manipulation structure (modeled by a manipulation graph $G^\star$) is known and, during training, the learner has access to both the pre-manipulation and post-manipulation data.

Classification, Multi-Label Learning

On the Effect of Defections in Federated Learning and How to Prevent Them

no code implementations • 28 Nov 2023 • Minbiao Han, Kumar Kshitij Patel, Han Shao, Lingxiao Wang

Federated learning is a machine learning protocol that enables a large population of agents to collaborate over multiple rounds to produce a single consensus model.

Federated Learning

Incentivized Collaboration in Active Learning

no code implementations • 1 Nov 2023 • Lee Cohen, Han Shao

In collaborative active learning, where multiple agents try to learn labels from a common hypothesis, we introduce an innovative framework for incentivized collaboration.

Active Learning

Strategic Classification under Unknown Personalized Manipulation

no code implementations NeurIPS 2023 Han Shao, Avrim Blum, Omar Montasser

Ball manipulations are a widely studied class of manipulations in the literature, where agents can modify their feature vector within a bounded radius ball.

Classification
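A minimal sketch of a ball manipulation against a linear classifier: a hypothetical agent moves to the positive side of the decision boundary whenever the boundary lies within its $\ell_2$ budget, and stays put otherwise. The `best_response` helper, the radius, and the linear model are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def best_response(x, w, b, radius):
    """Hypothetical agent best response under an l2 ball manipulation:
    if x is classified negative but the decision boundary of w.x + b
    lies within `radius`, move just across the boundary; else stay."""
    margin = w @ x + b
    if margin >= 0:
        return x                            # already classified positive
    dist = -margin / np.linalg.norm(w)      # l2 distance to the boundary
    if dist <= radius:
        # project x onto (just past) the positive side of the hyperplane
        return x + (dist + 1e-9) * w / np.linalg.norm(w)
    return x                                # boundary is out of reach

x = np.array([1.0, -2.0])
w = np.array([1.0, 1.0])
manipulated = best_response(x, w, b=0.0, radius=1.0)
```

The learner's difficulty, as studied in the paper, is that it observes only `manipulated`, not the original `x`.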

A Theory of PAC Learnability under Transformation Invariances

no code implementations • 15 Feb 2022 • Han Shao, Omar Montasser, Avrim Blum

One interesting observation is that distinguishing between the original data and the transformed data is necessary to achieve optimal accuracy in settings (ii) and (iii); this implies that any algorithm that does not differentiate between original and transformed data (including data augmentation) is not optimal.

Data Augmentation, Image Classification

Accurately Solving Rod Dynamics with Graph Learning

no code implementations NeurIPS 2021 Han Shao, Tassilo Kugelstadt, Torsten Hädrich, Wojtek Palubicki, Jan Bender, Soeren Pirk, Dominik Michels

In this contribution, we introduce a novel method to accelerate iterative solvers for rod dynamics with graph networks (GNs) by predicting the initial guesses to reduce the number of iterations.

Graph Learning
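The warm-starting idea can be illustrated with a plain Jacobi solver: a better initial guess reduces the iteration count. Here a hand-made guess near the true solution stands in for a trained graph network's prediction; the system and the solver are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=10_000):
    """Plain Jacobi iteration; returns the solution and iteration count."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = x0.copy()
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Diagonally dominant system, so Jacobi converges.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_true = np.linalg.solve(A, b)

_, iters_cold = jacobi(A, b, np.zeros(2))       # naive zero start
# Stand-in for a learned prediction: a guess near the true solution.
_, iters_warm = jacobi(A, b, x_true + 0.01)
```

The paper's contribution is learning such initial guesses with graph networks for rod dynamics, where the per-iteration cost is far higher than in this toy system.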

One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning

1 code implementation • 4 Mar 2021 • Avrim Blum, Nika Haghtalab, Richard Lanas Phillips, Han Shao

In recent years, federated learning has been embraced as an approach for bringing about collaboration across large populations of learning agents.

Federated Learning

Robust learning under clean-label attack

no code implementations • 1 Mar 2021 • Avrim Blum, Steve Hanneke, Jian Qian, Han Shao

We study the problem of robust learning under clean-label data-poisoning attacks, where the attacker injects (an arbitrary set of) correctly-labeled examples to the training set to fool the algorithm into making mistakes on specific test instances at test time.

Data Poisoning, PAC learning

Online Learning with Primary and Secondary Losses

no code implementations NeurIPS 2020 Avrim Blum, Han Shao

On the positive side, we show that running any switching-limited algorithm can achieve this goal if all experts satisfy the assumption that the secondary loss does not exceed the linear threshold by $o(T)$ for any time interval.

Stochastic Bandits with Vector Losses: Minimizing $\ell^\infty$-Norm of Relative Losses

no code implementations • 15 Oct 2020 • Xuedong Shang, Han Shao, Jian Qian

We study two goals: (a) finding the arm with the minimum $\ell^\infty$-norm of relative losses with a given confidence level (which refers to fixed-confidence best-arm identification); (b) minimizing the $\ell^\infty$-norm of cumulative relative losses (which refers to regret minimization).

Multi-Armed Bandits, Recommendation Systems
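Under one plausible reading of the objective, an arm's relative loss in each coordinate is its mean loss minus the best mean loss in that coordinate, and goal (a) selects the arm minimizing the $\ell^\infty$-norm of that vector. A small sketch with made-up mean losses (the matrix and its values are illustrative, not from the paper):

```python
import numpy as np

# Mean vector losses for 3 arms over 2 objectives (rows = arms).
mu = np.array([[0.2, 0.9],
               [0.8, 0.1],
               [0.5, 0.4]])

# Relative loss: each arm's loss minus the per-objective minimum.
relative = mu - mu.min(axis=0)   # [[0, 0.8], [0.6, 0], [0.3, 0.3]]
linf = relative.max(axis=1)      # [0.8, 0.6, 0.3]
best_arm = int(linf.argmin())    # arm 2: best worst-case relative loss
```

Arms 0 and 1 each excel on one objective but are poor on the other; the $\ell^\infty$ criterion prefers the balanced arm 2, which is the trade-off this formulation is designed to capture.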

Structure Adaptive Algorithms for Stochastic Bandits

no code implementations ICML 2020 Rémy Degenne, Han Shao, Wouter M. Koolen

We study reward maximisation in a wide class of structured stochastic multi-armed bandit problems, where the mean rewards of the arms satisfy given structural constraints, e.g. linear, unimodal, or sparse.

Accurately Solving Physical Systems with Graph Learning

no code implementations • 6 Jun 2020 • Han Shao, Tassilo Kugelstadt, Torsten Hädrich, Wojciech Pałubicki, Jan Bender, Sören Pirk, Dominik L. Michels

In this contribution, we introduce a novel method to accelerate iterative solvers for physical systems with graph networks (GNs) by predicting the initial guesses to reduce the number of iterations.

Graph Learning

Almost Optimal Algorithms for Linear Stochastic Bandits with Heavy-Tailed Payoffs

no code implementations NeurIPS 2018 Han Shao, Xiaotian Yu, Irwin King, Michael R. Lyu

In this paper, under a weaker assumption on the noise, we study the problem of linear stochastic bandits with heavy-tailed payoffs (LinBET), where the payoff distributions have finite moments of order $1+\epsilon$, for some $\epsilon \in (0, 1]$.
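Robust mean estimators such as median-of-means are a standard tool when payoffs only have finite $(1+\epsilon)$-moments: a single extreme draw can wreck the empirical mean but not the median of group means. A minimal sketch (the grouping scheme and example data are illustrative, not the paper's algorithm):

```python
import numpy as np

def median_of_means(samples, k):
    """Split samples into k groups, average each group, take the median
    of the group means. Robust to a few extreme heavy-tailed draws."""
    groups = np.array_split(np.asarray(samples, dtype=float), k)
    return float(np.median([g.mean() for g in groups]))

samples = [1.0] * 9 + [1000.0]        # one extreme heavy-tail draw
print(np.mean(samples))               # 100.9 -- wrecked by the outlier
print(median_of_means(samples, k=5))  # 1.0  -- the outlier is confined
                                      #         to a single group
```

The outlier contaminates only one of the five group means, so the median of the group means ignores it, which is why estimators of this flavor admit sub-Gaussian-style confidence bounds under weak moment assumptions.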
