1 code implementation • 20 Mar 2024 • Kaito Shiku, Shinnosuke Matsuo, Daiki Suehiro, Ryoma Bise
Existing MIL methods are unsuitable for LML because they aggregate instance confidences, which can make the bag-level label inconsistent with the label obtained by counting the instances predicted for each class.
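A minimal sketch of the inconsistency described above (illustrative only, not the paper's method): a bag label can be obtained either by averaging instance confidences and taking the argmax, or by classifying each instance and taking the majority class, and the two can disagree.

```python
from collections import Counter

def bag_label_by_confidence(probs):
    # Average class confidences over instances, then take the argmax.
    n_classes = len(probs[0])
    avg = [sum(p[c] for p in probs) / len(probs) for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

def bag_label_by_counting(probs):
    # Classify each instance first, then take the majority class.
    preds = [max(range(len(p)), key=p.__getitem__) for p in probs]
    return Counter(preds).most_common(1)[0][0]

# Three instances, two classes: two instances weakly favor class 0,
# one instance strongly favors class 1.
bag = [[0.6, 0.4], [0.6, 0.4], [0.05, 0.95]]
print(bag_label_by_confidence(bag))  # 1: averaged confidences favor class 1
print(bag_label_by_counting(bag))    # 0: the majority of instance predictions is class 0
```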
no code implementations • 20 Oct 2023 • Yuya Saito, Shinnosuke Matsuo, Seiichi Uchida, Daiki Suehiro
This paper tackles the problem of minimizing the worst-class error rate, instead of the standard error rate averaged over all classes.
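The two objectives can be contrasted with a small sketch (assumed toy data, not from the paper): under class imbalance, the class-averaged error can look moderate while one class is always misclassified.

```python
def per_class_errors(y_true, y_pred):
    # Error rate computed separately for each ground-truth class.
    classes = sorted(set(y_true))
    errs = {}
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        errs[c] = sum(y_pred[i] != c for i in idx) / len(idx)
    return errs

y_true = [0] * 8 + [1] * 2
y_pred = [0] * 8 + [0] * 2   # the minority class 1 is always misclassified
errs = per_class_errors(y_true, y_pred)
avg_err = sum(errs.values()) / len(errs)   # 0.5: class-averaged error
worst_err = max(errs.values())             # 1.0: worst-class error
```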
1 code implementation • 13 Sep 2023 • Shinnosuke Matsuo, Xiaomeng Wu, Gantugs Atarsaikhan, Akisato Kimura, Kunio Kashino, Brian Kenji Iwana, Seiichi Uchida
Unlike other learnable models using DTW for warping, our model predicts all local correspondences between two time series and is trained based on metric learning, which enables it to learn the optimal data-dependent warping for the target task.
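As background for the warping discussed above, classical DTW finds the optimal alignment between two series by dynamic programming; the model in this paper instead *predicts* the local correspondences and is trained with metric learning. A minimal DTW sketch for 1-D series:

```python
def dtw(a, b):
    # Dynamic-programming DTW distance with absolute-difference local cost.
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible alignment steps.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw([0, 1, 2], [0, 0, 1, 2]))  # 0.0: the second series is a time-warped copy
```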
no code implementations • ICCV 2023 • Takanori Asanomi, Shinnosuke Matsuo, Daiki Suehiro, Ryoma Bise
In this paper, we propose a bag-level data augmentation method for LLP called MixBag, based on a key observation from our preliminary experiments: the instance-level classification accuracy improves as the number of labeled bags increases, even though the total number of instances is fixed.
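A rough sketch of bag-level mixing in this spirit (assumptions are ours, not the paper's exact procedure: bags are lists of feature vectors with a class-proportion vector; a lambda-fraction of instances is sampled from each bag and the proportions are interpolated, which is only exact if sampling preserves each bag's proportions):

```python
import random

def mix_bags(bag1, p1, bag2, p2, lam, rng=random.Random(0)):
    # Sample a lam-fraction of instances from bag1 and fill the rest from bag2,
    # keeping the mixed bag the same size as bag1.
    n1 = round(lam * len(bag1))
    n2 = len(bag1) - n1
    mixed = rng.sample(bag1, n1) + rng.sample(bag2, n2)
    # Interpolate the bag-level label proportions with the same lam.
    mixed_p = [lam * a + (1 - lam) * b for a, b in zip(p1, p2)]
    return mixed, mixed_p

bag1 = [[float(i)] for i in range(10)]
bag2 = [[float(-i)] for i in range(6)]
mixed, mixed_p = mix_bags(bag1, [0.8, 0.2], bag2, [0.2, 0.8], lam=0.5)
```

Generating such synthetic bags increases the number of labeled bags without collecting any new instances, which is the observation the method exploits.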
1 code implementation • 17 Feb 2023 • Shinnosuke Matsuo, Ryoma Bise, Seiichi Uchida, Daiki Suehiro
This paper proposes a novel and efficient method for Learning from Label Proportions (LLP), whose goal is to train a classifier only by using the class label proportions of instance sets, called bags.
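For context, the standard baseline objective in LLP is the proportion loss: the cross-entropy between a bag's label proportion and the mean of the instance-level predicted class probabilities. A minimal sketch (background, not this paper's specific method):

```python
import math

def proportion_loss(instance_probs, bag_proportion, eps=1e-12):
    # Mean predicted probability per class over the bag's instances.
    n_classes = len(bag_proportion)
    mean_pred = [sum(p[c] for p in instance_probs) / len(instance_probs)
                 for c in range(n_classes)]
    # Cross-entropy between the known proportion and the mean prediction.
    return -sum(bag_proportion[c] * math.log(mean_pred[c] + eps)
                for c in range(n_classes))

probs = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]]   # predictions for a 3-instance bag
loss = proportion_loss(probs, [2 / 3, 1 / 3])
```

The loss is minimized when the bag's mean prediction matches the given proportion, so no instance-level labels are required.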
1 code implementation • 5 Nov 2021 • Daisuke Oba, Shinnosuke Matsuo, Brian Kenji Iwana
We propose a neural network that dynamically selects the best combination of data augmentation methods using a mutually beneficial gating network and a feature consistency loss.
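A rough sketch of the two ingredients named above, with all names and shapes illustrative (the paper's gating network and loss are more involved): a softmax gate weights the features of several augmented views, and a feature-consistency loss pulls the gated combination toward the original feature.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def gated_consistency_loss(orig_feat, aug_feats, gate_logits):
    # Gate weights select the combination of augmented-view features...
    w = softmax(gate_logits)
    combined = [sum(wk * f[d] for wk, f in zip(w, aug_feats))
                for d in range(len(orig_feat))]
    # ...and the consistency loss is the squared distance to the original feature.
    return sum((o - c) ** 2 for o, c in zip(orig_feat, combined))

# A gate that puts nearly all weight on the view matching the original
# feature drives the consistency loss toward zero.
loss = gated_consistency_loss([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [10.0, -10.0])
```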
1 code implementation • 28 Mar 2021 • Shinnosuke Matsuo, Xiaomeng Wu, Gantugs Atarsaikhan, Akisato Kimura, Kunio Kashino, Brian Kenji Iwana, Seiichi Uchida
This approach adapts a parameterized attention model to time warping for greater and more adaptive temporal invariance.
no code implementations • 8 Mar 2021 • Shinnosuke Matsuo, Seiichi Uchida, Brian Kenji Iwana
To exploit this fact, we propose the use of self-augmentation and combine it with multi-modal feature embedding.