no code implementations • 19 Oct 2023 • Ryotaro Mitsuboshi, Kohei Hatano, Eiji Takimoto
Metarounding is an approach for converting an approximation algorithm for linear optimization over a combinatorial class into an online linear optimization algorithm for the same class.
no code implementations • 28 Jun 2023 • Yaxiong Liu, Atsuyoshi Nakamura, Kohei Hatano, Eiji Takimoto
Then, we show a lower bound for pure exploration in multi-armed bandits with a low-rank sequence.
no code implementations • 8 Jun 2023 • Yiping Tang, Kohei Hatano, Eiji Takimoto
Some previous work proposes transforming neural networks into equivalent Boolean expressions and applying verification techniques for characteristics of interest.
1 code implementation • 22 Sep 2022 • Ryotaro Mitsuboshi, Kohei Hatano, Eiji Takimoto
LPBoost rapidly converges to an $\epsilon$-approximate solution in practice, but it is known to take $\Omega(m)$ iterations in the worst case, where $m$ is the sample size.
no code implementations • 10 Dec 2020 • Yaxiong Liu, Ken-ichiro Moridomi, Kohei Hatano, Eiji Takimoto
We consider a variant of the online semi-definite programming problem (OSDP): the decision space consists of semi-definite matrices with bounded $\Gamma$-trace norm, a generalization of the trace norm defined by a positive definite matrix $\Gamma$. To solve this problem, we utilise the follow-the-regularized-leader (FTRL) algorithm with a $\Gamma$-dependent log-determinant regularizer.
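As a rough illustration of the FTRL-with-log-determinant idea, the sketch below uses the plain regularizer $R(X) = -\log\det X$ on positive-definite matrices, for which the unconstrained FTRL update has a closed form. The paper's $\Gamma$-dependent regularizer and the $\Gamma$-trace-norm-bounded decision space are not modeled here; the `eps` seeding term and the function name are illustrative assumptions.

```python
import numpy as np

def ftrl_logdet(loss_matrices, eta=0.1, eps=1e-3):
    """Illustrative FTRL with a log-determinant regularizer.

    Simplification: decisions range over all positive-definite matrices
    (the paper's Gamma-trace-norm constraint is omitted).  With cumulative
    loss C, the unconstrained update
        X_t = argmin_X  tr(C X) - (1/eta) log det X
    has stationarity condition C - (1/eta) X^{-1} = 0, i.e. X_t = (eta C)^{-1}.
    """
    n = loss_matrices[0].shape[0]
    cumulative = eps * np.eye(n)  # eps*I keeps C invertible before any loss arrives
    decisions = []
    for loss in loss_matrices:
        decisions.append(np.linalg.inv(eta * cumulative))  # closed-form update
        cumulative = cumulative + loss                     # accumulate observed losses
    return decisions
```

The closed form makes the smoothing effect of the log-det barrier visible: directions that have accumulated large loss get small weight in the next decision.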
no code implementations • 15 Jul 2020 • Yaxiong Liu, Kohei Hatano, Eiji Takimoto
The cost is the makespan when the norm is the $L_\infty$-norm.
1 code implementation • 31 May 2020 • Daiki Suehiro, Kohei Hatano, Eiji Takimoto, Shuji Yamamoto, Kenichi Bannai, Akiko Takeda
We propose a new formulation of Multiple-Instance Learning (MIL), in which a unit of data consists of a set of instances called a bag.
no code implementations • 20 Nov 2018 • Daiki Suehiro, Kohei Hatano, Eiji Takimoto, Shuji Yamamoto, Kenichi Bannai, Akiko Takeda
Classifiers based on a single shapelet are not sufficiently strong for certain applications.
no code implementations • 27 Oct 2017 • Ken-ichiro Moridomi, Kohei Hatano, Eiji Takimoto
Moreover, we apply our method to online linear optimization over vectors and show that the FTRL with the Burg entropy regularizer, which is the analogue of the log-determinant regularizer in the vector case, works well.
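To make the vector-case analogue concrete, here is a hedged sketch of FTRL with the Burg entropy regularizer $R(x) = -\sum_i \log x_i$, which the abstract names as the vector analogue of the log-determinant regularizer. Any domain constraints from the paper are omitted, and the `eps` seed and function name are assumptions for illustration.

```python
import numpy as np

def ftrl_burg(loss_vectors, eta=0.1, eps=1e-3):
    """Illustrative FTRL with the Burg entropy regularizer R(x) = -sum_i log x_i.

    Over the unconstrained positive orthant, with cumulative loss c the update
        x_t = argmin_{x > 0}  <c, x> - (1/eta) sum_i log x_i
    has stationarity condition c_i - 1/(eta x_i) = 0, i.e. x_{t,i} = 1/(eta c_i).
    """
    d = len(loss_vectors[0])
    cumulative = eps * np.ones(d)  # positive seed keeps the argmin finite
    decisions = []
    for loss in loss_vectors:
        decisions.append(1.0 / (eta * cumulative))  # coordinate-wise closed form
        cumulative = cumulative + np.asarray(loss)  # accumulate observed losses
    return decisions
```

Note the structural parallel with the matrix case: inverting a positive scalar coordinate-wise plays the role of inverting a positive-definite matrix.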
no code implementations • 5 Sep 2017 • Daiki Suehiro, Kohei Hatano, Eiji Takimoto, Shuji Yamamoto, Kenichi Bannai, Akiko Takeda
We consider binary classification problems using local features of objects.
no code implementations • 8 Feb 2016 • Atsushi Shibagaki, Masayuki Karasuyama, Kohei Hatano, Ichiro Takeuchi
A significant advantage of considering them simultaneously rather than individually is a synergy effect: the results of the previous safe feature screening can be exploited to improve the next safe sample screening, and vice versa.
no code implementations • 5 Dec 2013 • Nir Ailon, Kohei Hatano, Eiji Takimoto
Unfortunately, CombBand requires at each step an approximation of an $n \times n$ matrix permanent to increasingly high accuracy as $T$ grows, resulting in a total running time that is super-linear in $T$ and making it impractical for large time horizons.