no code implementations • 11 Apr 2015 • Shota Okumura, Yoshiki Suzuki, Ichiro Takeuchi
This property is quite advantageous in a typical sensitivity analysis task where only a small number of instances are updated.
1 code implementation • NeurIPS 2015 • Atsushi Shibagaki, Yoshiki Suzuki, Masayuki Karasuyama, Ichiro Takeuchi
Careful tuning of a regularization parameter is indispensable in many machine learning tasks because it has a significant impact on generalization performance.
no code implementations • 10 Feb 2014 • Yoshiki Suzuki, Kohei Ogawa, Yuki Shinmura, Ichiro Takeuchi
If a reasonably good suboptimal model is available, our algorithm can compute lower and upper bounds of many useful quantities for making inferences on the unknown target model.
no code implementations • 27 Jan 2014 • Kohei Ogawa, Yoshiki Suzuki, Shinya Suzumura, Ichiro Takeuchi
Sparse classifiers such as the support vector machine (SVM) are efficient in the test phase because the classifier is characterized only by a subset of the samples, called support vectors (SVs); the remaining samples (non-SVs) have no influence on the classification result.
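The sparsity property described above can be verified directly: the SVM decision function f(x) = Σ_i α_i y_i K(x_i, x) + b sums only over the support vectors. The sketch below, using scikit-learn and a synthetic toy dataset (both assumptions of this illustration, not part of the paper), reconstructs the decision function from the support vectors alone and checks that it matches the fitted model.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

# Synthetic two-class data (hypothetical example for illustration only).
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) - 1.0, rng.randn(20, 2) + 1.0])
y = np.array([0] * 20 + [1] * 20)

gamma = 0.5  # kernel width, fixed so we can reuse it below
clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)

# Rebuild f(x) = sum_i alpha_i y_i K(x_i, x) + b using ONLY the support
# vectors; dual_coef_ stores alpha_i * y_i for each SV.
K = rbf_kernel(X, clf.support_vectors_, gamma=gamma)
manual = K @ clf.dual_coef_.ravel() + clf.intercept_

# The reconstruction agrees with the model's own decision function,
# confirming that non-SVs play no role at test time.
print(np.allclose(manual, clf.decision_function(X)))
print(f"{len(clf.support_vectors_)} of {len(X)} samples are support vectors")
```

Because only the SVs enter the sum, discarding the non-SVs after training leaves predictions unchanged, which is what makes the test phase cheap.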