no code implementations • 8 Mar 2024 • Jongyeong Lee, Junya Honda, Shinji Ito, Min-hwan Oh
In this paper, we establish a sufficient condition for perturbations to achieve $\mathcal{O}(\sqrt{KT})$ regret in the adversarial setting, which covers, e.g., Fréchet, Pareto, and Student-$t$ distributions.
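Perturbation-based bandit policies of this kind follow the Follow-the-Perturbed-Leader (FTPL) template: at each round the learner plays the arm minimizing its estimated cumulative loss minus a fresh random perturbation. The sketch below illustrates FTPL with Fréchet perturbations; the learning-rate tuning, the shape parameter `alpha`, and the Monte Carlo loss estimator are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def frechet(size, alpha, rng):
    # Inverse-transform sampling: if U ~ Uniform(0,1), then
    # (-log U)**(-1/alpha) has the Frechet CDF exp(-x**(-alpha)).
    u = rng.uniform(size=size)
    return (-np.log(u)) ** (-1.0 / alpha)

def ftpl_bandit(loss_matrix, alpha=2.0, seed=0):
    """Run FTPL on a T x K matrix of adversarial losses in [0, 1]."""
    rng = np.random.default_rng(seed)
    T, K = loss_matrix.shape
    eta = np.sqrt(K * T)          # illustrative perturbation scale
    L_hat = np.zeros(K)           # cumulative importance-weighted loss estimates
    total_loss = 0.0
    for t in range(T):
        z = frechet(K, alpha, rng)
        arm = int(np.argmin(L_hat - eta * z))   # perturbed leader
        loss = loss_matrix[t, arm]
        total_loss += loss
        # Crude Monte Carlo estimate of the arm-selection probability,
        # used to form an (approximately) unbiased loss estimate.
        sims = np.argmin(L_hat[None, :] - eta * frechet((1024, K), alpha, rng), axis=1)
        p = max((sims == arm).mean(), 1.0 / 1024)
        L_hat[arm] += loss / p
    return total_loss
```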
no code implementations • 1 Oct 2023 • Jongyeong Lee, Junya Honda, Masashi Sugiyama
This paper studies the fixed-confidence best arm identification (BAI) problem in the bandit framework for canonical single-parameter exponential family models.
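For context, a generic fixed-confidence BAI routine looks like the successive-elimination sketch below; the Hoeffding-style confidence radius and the union-bound choice of `delta` are standard textbook devices, not the algorithm analyzed in the paper.

```python
import numpy as np

def successive_elimination(pull, K, delta=0.05, max_rounds=10_000):
    """Fixed-confidence BAI via successive elimination.

    `pull(i)` returns one reward sample from arm i, assumed in [0, 1].
    Returns the surviving arm once all others are eliminated.
    """
    active = list(range(K))
    means = np.zeros(K)
    counts = np.zeros(K)
    for r in range(1, max_rounds + 1):
        for i in active:
            x = pull(i)
            counts[i] += 1
            means[i] += (x - means[i]) / counts[i]   # running mean
        # Hoeffding-style radius with a union bound over arms and rounds.
        rad = np.sqrt(np.log(4 * K * r * r / delta) / (2 * r))
        best = max(means[i] for i in active)
        active = [i for i in active if means[i] + rad >= best - rad]
        if len(active) == 1:
            return active[0]
    return max(active, key=lambda i: means[i])       # budget exhausted
```

For example, `successive_elimination(lambda i: np.random.binomial(1, [0.3, 0.5, 0.7][i]), K=3)` identifies the third arm with probability at least `1 - delta`.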
no code implementations • 28 Feb 2023 • Jongyeong Lee, Chao-Kai Chiang, Masashi Sugiyama
Although the uniform prior is shown to be optimal, we highlight an inherent limitation of this optimality: it holds only under specific parameterizations, which emphasizes the significance of the invariance property of priors.
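To see why uniformity is parameterization-dependent: a prior that is uniform over the mean parameter is no longer uniform after a smooth reparameterization, which is precisely what an invariance property (e.g., that of the Jeffreys prior) addresses. The sketch below merely visualizes this with a logit transform and is not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(size=100_000)        # uniform prior on the mean p in (0, 1)
theta = np.log(p / (1 - p))          # reparameterize to the logit scale

# By the change-of-variables formula, theta has density
# exp(theta) / (1 + exp(theta))**2 -- logistic, not uniform.
hist, edges = np.histogram(theta, bins=50, range=(-5, 5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
logistic = np.exp(centers) / (1 + np.exp(centers)) ** 2
print(np.max(np.abs(hist - logistic)))  # small gap: empirical density is logistic
```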
no code implementations • 3 Feb 2023 • Jongyeong Lee, Junya Honda, Chao-Kai Chiang, Masashi Sugiyama
In addition to its strong empirical performance, Thompson sampling (TS) has been shown to achieve asymptotic problem-dependent lower bounds in several models.
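The problem-dependent lower bound in question is, for the standard $K$-armed setting, the Lai-Robbins bound: any consistent policy must pull every suboptimal arm $i$ at a rate satisfying

$$\liminf_{T \to \infty} \frac{\mathbb{E}[N_i(T)]}{\log T} \ge \frac{1}{\mathrm{KL}(\nu_i, \nu^*)},$$

where $N_i(T)$ is the number of pulls of arm $i$ by round $T$ and $\mathrm{KL}(\nu_i, \nu^*)$ is the KL divergence between arm $i$'s reward distribution and that of the optimal arm. Matching this bound with equality is the asymptotic optimality that TS attains in the models mentioned.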
no code implementations • 5 Jan 2021 • Nontawat Charoenphakdee, Jongyeong Lee, Masashi Sugiyama
When minimizing the empirical risk in binary classification, it is a common practice to replace the zero-one loss with a surrogate loss to make the learning objective feasible to optimize.
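The zero-one loss $\mathbb{1}[yf(x) \le 0]$ is discontinuous and non-convex in $f$, so one typically minimizes a convex surrogate such as the logistic loss $\log(1 + e^{-yf(x)})$ instead. The minimal sketch below, on synthetic data, trains a linear classifier by gradient descent on the logistic surrogate and then reports the zero-one risk one actually cares about; the data, step size, and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 2
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.5 * rng.normal(size=n))    # noisy linear labels in {-1, +1}

w = np.zeros(d)
for _ in range(200):                               # plain gradient descent
    margins = y * (X @ w)
    # Gradient of the mean logistic loss log(1 + exp(-y x.w)).
    grad = -(X * (y / (1 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= 0.5 * grad

zero_one = (y * (X @ w) <= 0).mean()               # the target risk
logistic = np.log1p(np.exp(-y * (X @ w))).mean()   # the surrogate we optimized
print(f"zero-one risk: {zero_one:.3f}, logistic risk: {logistic:.3f}")
```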
no code implementations • IJCNLP 2019 • Nontawat Charoenphakdee, Jongyeong Lee, Yiping Jin, Dittaya Wanvarie, Masashi Sugiyama
We consider a document classification problem in which document labels are absent and only relevant keywords of a target class, together with unlabeled documents, are given.
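One naive baseline for this setting (not the paper's method) is to score unlabeled documents by keyword overlap and treat high-scoring ones as pseudo-positives, after which the remaining documents stay unlabeled and the task reduces to a PU-learning-style problem. The sketch below shows only that scoring step, with a made-up keyword set and documents.

```python
# Naive keyword-scoring baseline for learning from keywords plus
# unlabeled documents -- illustrative, not the paper's method.
keywords = {"bandit", "regret", "exploration"}

docs = [
    "regret bounds for bandit exploration",
    "a recipe for sourdough bread",
    "thompson sampling and bandit regret",
]

def keyword_score(doc, keywords):
    """Fraction of the keyword set appearing in the document."""
    tokens = set(doc.lower().split())
    return len(tokens & keywords) / len(keywords)

# Documents above a threshold become pseudo-positives.
pseudo_positive = [d for d in docs if keyword_score(d, keywords) >= 2 / 3]
print(pseudo_positive)
```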
no code implementations • 30 Jan 2019 • Jongyeong Lee, Nontawat Charoenphakdee, Seiichi Kuroki, Masashi Sugiyama
Appropriately evaluating the discrepancy between domains is essential for the success of unsupervised domain adaptation.
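A common instantiation of a discrepancy between domains is the maximum mean discrepancy (MMD); the sketch below computes a (biased) estimate of squared MMD with an RBF kernel. This is one standard measure used to illustrate the idea, not necessarily the one proposed in the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # k(a, b) = exp(-gamma * ||a - b||^2), computed pairwise.
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 3))   # source-domain features
tgt = rng.normal(0.5, 1.0, size=(200, 3))   # mean-shifted target domain
print(mmd2(src, tgt))                        # larger shift -> larger MMD
```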
1 code implementation • 27 Jan 2019 • Nontawat Charoenphakdee, Jongyeong Lee, Masashi Sugiyama
This paper aims to provide a better understanding of symmetric losses.
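A margin loss $\ell$ is symmetric when $\ell(z) + \ell(-z)$ is a constant for all $z$; the sigmoid loss $\ell(z) = 1/(1 + e^{z})$ satisfies this with constant $1$, while the logistic loss does not. A quick numerical check:

```python
import numpy as np

z = np.linspace(-5, 5, 11)

sigmoid_loss = lambda z: 1 / (1 + np.exp(z))     # symmetric
logistic_loss = lambda z: np.log1p(np.exp(-z))   # not symmetric

print(sigmoid_loss(z) + sigmoid_loss(-z))    # constant: all ones
print(logistic_loss(z) + logistic_loss(-z))  # varies with z
```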