no code implementations • 26 Mar 2024 • Shohei Enomoto, Naoya Hasegawa, Kazuki Adachi, Taku Sasaki, Shin'ya Yamaguchi, Satoshi Suzuki, Takeharu Eda
We hypothesize that enhancing the input image reduces prediction uncertainty and increases the accuracy of TTA methods.
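A minimal sketch of the uncertainty quantity this hypothesis concerns, assuming a placeholder classifier model and input batch x: prediction entropy, the proxy that entropy-minimizing TTA methods reduce.

import torch
import torch.nn.functional as F

def prediction_entropy(model, x):
    # Shannon entropy of the softmax prediction, averaged over the batch;
    # a standard uncertainty proxy in test-time adaptation.
    probs = F.softmax(model(x), dim=1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()

# The hypothesis predicts prediction_entropy(model, enhance(x)) is lower than
# prediction_entropy(model, x), with accuracy rising accordingly (`enhance` is
# a placeholder for the input-enhancement step).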
no code implementations • 21 Mar 2024 • Kazuki Adachi, Shohei Enomoto, Taku Sasaki, Shin'ya Yamaguchi
However, in re-id, uncertainty cannot be computed in the same way as in classification, because re-id is an open-set task in which person labels are not shared between training and testing.
no code implementations • 15 Mar 2024 • Shin'ya Yamaguchi, Sekitoshi Kanai, Kazuki Adachi, Daiki Chijiwa
To this end, AdaRand minimizes the gap between feature vectors and random reference vectors sampled from class-conditional Gaussian distributions.
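A minimal sketch of an AdaRand-style regularizer, not the authors' implementation; the per-class Gaussian parameters class_means and class_stds are illustrative assumptions.

import torch

def adarand_style_loss(features, labels, class_means, class_stds):
    # features: (B, D) penultimate features; labels: (B,) class indices.
    # class_means / class_stds: (C, D) per-class Gaussian parameters.
    ref = class_means[labels] + class_stds[labels] * torch.randn_like(features)
    # Pull each feature toward its sampled class-conditional reference vector.
    return (features - ref).pow(2).sum(dim=1).mean()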
no code implementations • 22 Nov 2023 • Shin'ya Yamaguchi, Takuma Fukuda
Synthetic samples from diffusion models are promising for use in training discriminative models as replications of real training datasets.
no code implementations • 28 Sep 2023 • Shin'ya Yamaguchi
Instead of using real unlabeled datasets, we propose an SSL method using synthetic datasets generated from generative foundation models trained on datasets containing millions of samples in diverse domains (e.g., ImageNet).
no code implementations • ICCV 2023 • Satoshi Suzuki, Shin'ya Yamaguchi, Shoichiro Takeda, Sekitoshi Kanai, Naoki Makishima, Atsushi Ando, Ryo Masumura
This paper addresses the tradeoff between standard accuracy on clean examples and robustness against adversarial examples in deep neural networks (DNNs).
no code implementations • 9 Jun 2023 • Masanori Yamada, Tomoya Yamashita, Shin'ya Yamaguchi, Daiki Chijiwa
We also show that merged models require datasets for merging in order to achieve high accuracy.
no code implementations • 21 Jul 2022 • Sekitoshi Kanai, Shin'ya Yamaguchi, Masanori Yamada, Hiroshi Takahashi, Kentaro Ohno, Yasutoshi Ida
This paper proposes a new loss function for adversarial training.
1 code implementation • 31 May 2022 • Daiki Chijiwa, Shin'ya Yamaguchi, Atsutoshi Kumagai, Yasutoshi Ida
Few-shot learning for neural networks (NNs) is an important problem that aims to train NNs from only a few data samples.
no code implementations • 28 Apr 2022 • Kazuki Adachi, Shin'ya Yamaguchi, Atsutoshi Kumagai
Test-time adaptation (TTA), which aims to adapt models without accessing the training dataset, is one of the settings that can address this problem.
no code implementations • 27 Apr 2022 • Shin'ya Yamaguchi, Sekitoshi Kanai, Atsutoshi Kumagai, Daiki Chijiwa, Hisashi Kashima
To transfer source knowledge without these assumptions, we propose a transfer learning method that uses deep generative models and is composed of the following two stages: pseudo pre-training (PP) and pseudo semi-supervised learning (P-SSL).
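A minimal sketch of how generated samples might serve as unlabeled data in the P-SSL stage, using FixMatch-style confidence thresholding as a stand-in (not necessarily the paper's procedure); classifier, synthetic_images, and the threshold are placeholders.

import torch
import torch.nn.functional as F

def pseudo_label_loss(classifier, synthetic_images, threshold=0.95):
    # Pseudo-label generated images with the current classifier and train
    # only on confident predictions.
    with torch.no_grad():
        probs = F.softmax(classifier(synthetic_images), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf >= threshold
    if keep.sum() == 0:
        return synthetic_images.new_zeros(())  # no confident samples this batch
    return F.cross_entropy(classifier(synthetic_images[keep]), pseudo[keep])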
no code implementations • 9 Feb 2022 • Kazuki Adachi, Shin'ya Yamaguchi
Under this type of distribution shift, CNNs learn to focus on features that are not task-relevant, such as backgrounds in the training data, which degrades their accuracy on the test data.
1 code implementation • NeurIPS 2021 • Daiki Chijiwa, Shin'ya Yamaguchi, Yasutoshi Ida, Kenji Umakoshi, Tomohiro Inoue
Pruning the weights of randomly initialized neural networks plays an important role in the context of the lottery ticket hypothesis.
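A minimal sketch, under loose assumptions, of the basic operation behind strong-lottery-ticket-style results: keeping only the largest-magnitude weights of a randomly initialized layer, with no weight training (this is not the paper's specific algorithm).

import torch

def magnitude_prune_mask(weight, keep_ratio=0.5):
    # Binary mask keeping the top `keep_ratio` fraction of weights by magnitude.
    k = int(weight.numel() * keep_ratio)
    threshold = weight.abs().flatten().kthvalue(weight.numel() - k + 1).values
    return (weight.abs() >= threshold).float()

w = torch.empty(256, 512).normal_(0, 0.05)  # randomly initialized, never trained
subnetwork = w * magnitude_prune_mask(w)    # pruned subnetwork used as-is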
no code implementations • ICCV 2021 • Shin'ya Yamaguchi, Sekitoshi Kanai
The key idea of F-Drop is to filter out unnecessary high-frequency components from the input images of the discriminators.
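A minimal sketch of the filtering idea, assuming an illustrative cutoff: a centered FFT low-pass mask that removes high-frequency components from images before they reach the discriminator.

import torch

def low_pass(images, cutoff=8):
    # images: (B, C, H, W). Zero out frequencies farther than `cutoff`
    # from the spectrum center, then invert the FFT.
    spec = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
    _, _, H, W = images.shape
    yy, xx = torch.meshgrid(torch.arange(H) - H // 2,
                            torch.arange(W) - W // 2, indexing="ij")
    mask = ((yy.abs() <= cutoff) & (xx.abs() <= cutoff)).to(images.device)
    return torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(-2, -1))).real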
no code implementations • 6 Oct 2020 • Sekitoshi Kanai, Masanori Yamada, Shin'ya Yamaguchi, Hiroshi Takahashi, Yasutoshi Ida
We theoretically and empirically reveal that making logits small by adding a common activation function, e.g., hyperbolic tangent, does not improve adversarial robustness, since the input vectors of the function (pre-logit vectors) can have large norms.
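A toy illustration of this point (an assumption-laden demo, not the paper's analysis): tanh bounds the logits, yet the pre-logit vectors keep arbitrarily large norms and the activation saturates.

import torch

z = torch.randn(4, 10) * 100.0          # pre-logit vectors with large norms
logits = torch.tanh(z)                  # logits are now bounded in (-1, 1)
print(z.norm(dim=1))                    # norms remain large (~hundreds)
print(logits.abs().max(dim=1).values)   # ~1.0: tanh is saturated
# Bounded logits with saturated activations give near-zero gradients through
# tanh, so squashing alone does not constrain the pre-logit representation.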
no code implementations • 25 Dec 2019 • Shin'ya Yamaguchi, Sekitoshi Kanai, Takeharu Eda
When each target dataset is reduced to 5,000 images, Domain Fusion achieves better classification accuracy than data augmentation using fine-tuned GANs.
no code implementations • 25 Dec 2019 • Shin'ya Yamaguchi, Sekitoshi Kanai, Tetsuya Shioda, Shoichiro Takeda
Rotation prediction (Rotation) is a simple pretext task for self-supervised learning (SSL), in which models learn representations useful for target vision tasks by solving pretext tasks.
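A minimal sketch of the Rotation pretext task, assuming a model with a 4-way rotation head (model and x are placeholders): rotate each image by 0/90/180/270 degrees and train the model to predict which rotation was applied.

import torch
import torch.nn.functional as F

def rotation_batch(images):
    # images: (B, C, H, W) -> (4B, C, H, W) rotated copies and (4B,) targets.
    rotated = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)])
    targets = torch.arange(4, device=images.device).repeat_interleave(images.size(0))
    return rotated, targets

# rotated, targets = rotation_batch(x)
# loss = F.cross_entropy(model(rotated), targets)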