no code implementations • 10 Feb 2024 • Tajima Shinji, Ren Sugihara, Ryota Kitahara, Masayuki Karasuyama
LAGRA learns importance weights for small attributed subgraphs, called attributed graphlets (AGs), while simultaneously optimizing their attribute vectors.
no code implementations • 22 Nov 2023 • Ryota Ozaki, Kazuki Ishikawa, Youhei Kanzaki, Shinya Suzuki, Shion Takeno, Ichiro Takeuchi, Masayuki Karasuyama
Many real-world black-box optimization problems require optimizing multiple criteria simultaneously.
no code implementations • 7 Nov 2023 • Shion Takeno, Yu Inatsu, Masayuki Karasuyama, Ichiro Takeuchi
We show that PIMS achieves a tighter BCR bound and avoids hyperparameter tuning, unlike GP-UCB.
1 code implementation • 3 Feb 2023 • Shion Takeno, Masahiro Nomura, Masayuki Karasuyama
This observation motivates us to improve the MCMC-based estimation for the skew GP, for which we show the practical efficiency of Gibbs sampling and derive a low-variance MC estimator.
no code implementations • 3 Feb 2023 • Shion Takeno, Yu Inatsu, Masayuki Karasuyama
Gaussian process upper confidence bound (GP-UCB) is a theoretically promising approach for black-box optimization; however, the confidence parameter $\beta$ is considerably large in the theorem and chosen heuristically in practice.
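To make the role of the confidence parameter $\beta$ concrete, here is a minimal GP-UCB sketch on a 1-D grid. It is an illustrative toy (RBF kernel, closed-form GP posterior, hand-picked `beta`), not the paper's PIMS method; all names and constants are assumptions for the example.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # squared-exponential kernel on 1-D inputs
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # exact GP posterior mean and std at test points Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 0.0))

def ucb_next(X, y, Xs, beta=4.0):
    # GP-UCB acquisition: query the maximizer of mu + sqrt(beta) * std
    mu, sd = gp_posterior(X, y, Xs)
    return Xs[np.argmax(mu + np.sqrt(beta) * sd)]

# toy objective and a short optimization run
f = lambda x: np.sin(3 * x)
grid = np.linspace(0, 2, 201)
X = np.array([0.1, 1.9])
y = f(X)
for _ in range(15):
    xn = ucb_next(X, y, grid)
    X = np.append(X, xn)
    y = np.append(y, f(xn))
```

Note how `beta` trades off exploitation (`mu`) against exploration (`sd`); the theoretical values of $\beta$ that guarantee regret bounds are typically much larger than what works well empirically, which is the gap the snippet above alludes to.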
no code implementations • 31 Jan 2022 • Yu Inatsu, Shion Takeno, Masayuki Karasuyama, Ichiro Takeuchi
In black-box function optimization, we need to consider not only controllable design variables but also uncontrollable stochastic environment variables.
no code implementations • 16 Nov 2021 • Shunya Kusakawa, Shion Takeno, Yu Inatsu, Kentaro Kutsukake, Shogo Iwazaki, Takashi Nakano, Toru Ujihara, Masayuki Karasuyama, Ichiro Takeuchi
A cascade process is a multistage process in which the output of one stage is used as an input for the subsequent stage.
no code implementations • 19 Feb 2021 • Shion Takeno, Tomoyuki Tamura, Kazuki Shitara, Masayuki Karasuyama
Max-value entropy search (MES) is one of the state-of-the-art approaches in Bayesian optimization (BO).
no code implementations • 15 Mar 2020 • Shion Takeno, Yuhki Tsukada, Hitoshi Fukuoka, Toshiyuki Koyama, Motoki Shiga, Masayuki Karasuyama
Hence, we considered estimating a region of material parameter space in which a computational model produces precipitates having shapes similar to those observed in the experimental images.
2 code implementations • 3 Feb 2020 • Tomoki Yoshida, Ichiro Takeuchi, Masayuki Karasuyama
Hence, we propose a supervised distance metric learning method for the graph classification problem.
no code implementations • 13 Sep 2019 • Yu Inatsu, Masayuki Karasuyama, Keiichi Inoue, Ichiro Takeuchi
As part of a quality control process in manufacturing, it is often necessary to test whether all parts of a product satisfy a required property, with as few inspections as possible.
no code implementations • ICML 2020 • Shinya Suzuki, Shion Takeno, Tomoyuki Tamura, Kazuki Shitara, Masayuki Karasuyama
Entropy search is a successful approach to Bayesian optimization.
no code implementations • 6 May 2019 • Vo Nguyen Le Duy, Takuto Sakuma, Taiju Ishiyama, Hiroki Toda, Kazuya Nishi, Masayuki Karasuyama, Yuta Okubo, Masayuki Sunaga, Yasuo Tabei, Ichiro Takeuchi
Given two groups of trajectories, the goal of this problem is to extract moving patterns in the form of sub-trajectories which are more similar to sub-trajectories of one group and less similar to those of the other.
no code implementations • ICML 2020 • Shion Takeno, Hitoshi Fukuoka, Yuhki Tsukada, Toshiyuki Koyama, Motoki Shiga, Ichiro Takeuchi, Masayuki Karasuyama
In this paper, we focus on the information-based approach, which is a popular and empirically successful approach in BO.
1 code implementation • 12 Feb 2018 • Tomoki Yoshida, Ichiro Takeuchi, Masayuki Karasuyama
Distance metric learning can optimize a metric over a set of triplets, each of which is defined by a pair of same-class instances and an instance from a different class.
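As a small illustration of triplet-based metric learning (a generic hinge-loss sketch, not the paper's interpretable-metric algorithm; all names here are made up for the example), one can learn a diagonal Mahalanobis metric so that each anchor is closer to its same-class partner than to the different-class instance by a margin:

```python
import numpy as np

def triplet_hinge_fit(triplets, dim, margin=1.0, lr=0.05, epochs=200):
    # learn per-dimension weights w for d_w(x, y) = sum_k w_k (x_k - y_k)^2
    w = np.ones(dim)
    for _ in range(epochs):
        for a, p, n in triplets:
            dp = (a - p) ** 2          # per-dimension gaps to same-class point
            dn = (a - n) ** 2          # per-dimension gaps to other-class point
            if w @ dp + margin > w @ dn:   # triplet constraint violated
                w -= lr * (dp - dn)        # hinge-loss subgradient step
            w = np.maximum(w, 0.0)         # keep the metric valid (w >= 0)
    return w

# toy data: dimension 0 separates the classes, dimension 1 is pure noise
rng = np.random.default_rng(0)
A = np.c_[rng.normal(0, 0.1, 50), rng.normal(0, 1, 50)]
B = np.c_[rng.normal(1, 0.1, 50), rng.normal(0, 1, 50)]
triplets = [(A[i], A[i + 1], B[i]) for i in range(40)]
w = triplet_hinge_fit(triplets, 2)
```

On such data the learned weight for the discriminative dimension should dominate the weight for the noise dimension, which is exactly the behavior the triplet constraints encode.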
no code implementations • 15 Feb 2016 • Kazuya Nakagawa, Shinya Suzumura, Masayuki Karasuyama, Koji Tsuda, Ichiro Takeuchi
The SPP method allows us to efficiently find a superset of all the predictive patterns in the database that are needed for the optimal predictive model.
no code implementations • 8 Feb 2016 • Atsushi Shibagaki, Masayuki Karasuyama, Kohei Hatano, Ichiro Takeuchi
A significant advantage of considering them simultaneously rather than individually is their synergy: the results of a previous safe feature screening step can be exploited to improve the subsequent safe sample screening, and vice versa.
no code implementations • 12 Jul 2015 • Shinya Suzumura, Kohei Ogawa, Masashi Sugiyama, Masayuki Karasuyama, Ichiro Takeuchi
An advantage of our homotopy approach is that it can be interpreted as simulated annealing, a common approach for finding a good local optimal solution in non-convex optimization problems.
no code implementations • 26 Jun 2015 • Kazuya Nakagawa, Shinya Suzumura, Masayuki Karasuyama, Koji Tsuda, Ichiro Takeuchi
An SFS rule has the property that, if a feature satisfies the rule, then the feature is guaranteed to be inactive in the LASSO solution, meaning that it can be safely screened out prior to the LASSO training process.
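To illustrate what such a guarantee looks like, here is a sketch of one classic safe feature test for the LASSO $\min_\beta \tfrac12\|y - X\beta\|^2 + \lambda\|\beta\|_1$ (the basic SAFE sphere test in the style of El Ghaoui et al.; the paper's rules differ, and this example's data and names are assumptions):

```python
import numpy as np

def safe_screen(X, y, lam):
    # lam_max is the smallest lam for which the LASSO solution is all-zero
    lam_max = np.max(np.abs(X.T @ y))
    r = np.linalg.norm(y) * (lam_max - lam) / lam_max
    # feature j is provably inactive if |x_j^T y| < lam - ||x_j|| * r
    scores = np.abs(X.T @ y)
    return scores < lam - np.linalg.norm(X, axis=0) * r

# toy problem: 3 true features, 17 irrelevant ones
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
X /= np.linalg.norm(X, axis=0)             # unit-norm columns
beta_true = np.zeros(20)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.01 * rng.normal(size=100)
lam = 0.9 * np.max(np.abs(X.T @ y))
inactive = safe_screen(X, y, lam)          # boolean mask of screened features
```

The key point is safety: every feature the test flags is guaranteed inactive at this $\lambda$, so discarding it cannot change the LASSO solution, while strongly correlated features are kept.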
1 code implementation • NeurIPS 2015 • Atsushi Shibagaki, Yoshiki Suzuki, Masayuki Karasuyama, Ichiro Takeuchi
Careful tuning of a regularization parameter is indispensable in many machine learning tasks because it has a significant impact on generalization performances.
no code implementations • NeurIPS 2013 • Masayuki Karasuyama, Hiroshi Mamitsuka
In this approach, edge weights represent both similarity and local reconstruction weight simultaneously, both being reasonable for label propagation.
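For context, graph-based label propagation spreads the few known labels along weighted edges; the snippet below is the standard closed-form variant with symmetric normalization (a generic sketch, not the paper's adaptive edge-weight construction; the toy graph is an assumption):

```python
import numpy as np

def label_propagation(W, Y, alpha=0.9):
    # W: symmetric edge-weight matrix; Y: one-hot labels (zero rows = unlabeled)
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))        # symmetric normalization D^-1/2 W D^-1/2
    F = np.linalg.solve(np.eye(len(W)) - alpha * S, Y)
    return F.argmax(axis=1)                # predicted class per node

# tiny two-cluster graph with one labeled node per cluster
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1                    # weak bridge between the clusters
Y = np.zeros((6, 2))
Y[0, 0] = 1                                # node 0 labeled class 0
Y[5, 1] = 1                                # node 5 labeled class 1
pred = label_propagation(W, Y)
```

Because the bridge edge is weak, the labels stay within their clusters, which shows why the quality of the edge weights (the subject of the snippet above) is decisive for label propagation.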
no code implementations • NeurIPS 2009 • Masayuki Karasuyama, Ichiro Takeuchi
Conventional single incremental-decremental SVM can update the trained model efficiently when a single data point is added to or removed from the training set.