no code implementations • ICML 2020 • Yasutoshi Ida, Sekitoshi Kanai, Yasuhiro Fujiwara, Tomoharu Iwata, Koh Takeuchi, Hisashi Kashima
This is because coordinate descent iteratively updates all the parameters in the objective until convergence.
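To make the "update all parameters until convergence" pattern concrete, here is a minimal, generic coordinate-descent sketch for the Lasso (this is standard textbook coordinate descent, not the specific solver discussed in the paper):

```python
import numpy as np

def lasso_coordinate_descent(X, y, lam, n_sweeps=100):
    """Coordinate descent for the Lasso: min_w 0.5*||y - Xw||^2 + lam*||w||_1.
    Each sweep updates every coordinate of w in turn until convergence."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)  # per-coordinate curvature, precomputed
    for _ in range(n_sweeps):
        for j in range(d):
            # partial residual with coordinate j removed
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r
            # closed-form soft-thresholding update for coordinate j
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w
```

Each outer sweep touches every parameter, which is exactly why plain coordinate descent becomes expensive on high-dimensional problems.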
no code implementations • 15 Mar 2024 • Shin'ya Yamaguchi, Sekitoshi Kanai, Kazuki Adachi, Daiki Chijiwa
To this end, AdaRand minimizes the gap between feature vectors and random reference vectors that are sampled from class conditional Gaussian distributions.
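A heavily simplified sketch of that objective is shown below; the function name, the fixed noise scale `sigma`, and the plain squared-gap penalty are all illustrative assumptions, not the paper's actual loss:

```python
import numpy as np

def gap_to_random_reference(features, labels, class_means, sigma=1.0, rng=None):
    """Hypothetical sketch of an AdaRand-style penalty: for each sample, draw a
    random reference vector from a class-conditional Gaussian N(mu_y, sigma^2 I)
    and penalize the squared gap between the feature vector and its reference."""
    rng = np.random.default_rng() if rng is None else rng
    refs = class_means[labels] + sigma * rng.normal(size=features.shape)
    return ((features - refs) ** 2).sum(axis=1).mean()
```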
no code implementations • ICCV 2023 • Satoshi Suzuki, Shin'ya Yamaguchi, Shoichiro Takeda, Sekitoshi Kanai, Naoki Makishima, Atsushi Ando, Ryo Masumura
This paper addresses the tradeoff between standard accuracy on clean examples and robustness against adversarial examples in deep neural networks (DNNs).
no code implementations • 14 Mar 2023 • Yasutoshi Ida, Sekitoshi Kanai, Kazuki Adachi, Atsutoshi Kumagai, Yasuhiro Fujiwara
Regularized discrete optimal transport (OT) is a powerful tool to measure the distance between two discrete distributions that have been constructed from data samples on two different domains.
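The standard solver for entropy-regularized discrete OT is the Sinkhorn iteration; the sketch below shows that generic baseline (it is not the method proposed in the paper):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=200):
    """Entropy-regularized OT via Sinkhorn iterations.
    a, b: source/target histograms; C: cost matrix; eps: regularization weight."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # scale columns to match marginal b
        u = a / (K @ v)                  # scale rows to match marginal a
    P = u[:, None] * K * v[None, :]      # transport plan
    return P, (P * C).sum()              # plan and its transport cost
```

The returned plan's marginals match `a` and `b`, and the transport cost it induces serves as the regularized distance between the two distributions.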
no code implementations • 4 Oct 2022 • Kentaro Ohno, Sekitoshi Kanai, Yasutoshi Ida
We prove that the vanishing gradient of the gate function can be mitigated by accelerating the convergence of the saturating function, i.e., making the output of the function converge to 0 or 1 faster.
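One simple way to make a saturating gate converge to 0 or 1 faster is to scale its input by a gain greater than one; the sketch below uses that device for illustration and is not the specific gate function proposed in the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fast_gate(x, alpha=2.0):
    """A gate that saturates faster than the standard sigmoid by scaling
    the input with a gain alpha > 1, so its output converges to 0 or 1
    more quickly as |x| grows."""
    return sigmoid(alpha * x)
```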
no code implementations • 21 Jul 2022 • Sekitoshi Kanai, Shin'ya Yamaguchi, Masanori Yamada, Hiroshi Takahashi, Kentaro Ohno, Yasutoshi Ida
This paper proposes a new loss function for adversarial training.
no code implementations • 27 Apr 2022 • Shin'ya Yamaguchi, Sekitoshi Kanai, Atsutoshi Kumagai, Daiki Chijiwa, Hisashi Kashima
To transfer source knowledge without these assumptions, we propose a transfer learning method that uses deep generative models and is composed of the following two stages: pseudo pre-training (PP) and pseudo semi-supervised learning (P-SSL).
no code implementations • ICCV 2021 • Shin'ya Yamaguchi, Sekitoshi Kanai
The key idea of F-Drop is to filter out unnecessary high-frequency components from the input images of the discriminators.
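A minimal sketch of that filtering step is below: a 2D FFT low-pass filter that zeroes out frequency components beyond a cutoff radius. The circular mask and the cutoff parameter are assumptions for illustration; the paper's exact filtering scheme may differ:

```python
import numpy as np

def low_pass_filter(images, radius):
    """Remove high-frequency components from a batch of grayscale images.
    images: (N, H, W) array; radius: cutoff on the centered frequency grid."""
    n, h, w = images.shape
    f = np.fft.fftshift(np.fft.fft2(images), axes=(-2, -1))
    yy, xx = np.mgrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist <= radius                       # keep only low frequencies
    out = np.fft.ifft2(np.fft.ifftshift(f * mask, axes=(-2, -1)))
    return out.real
```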
no code implementations • 2 Mar 2021 • Sekitoshi Kanai, Masanori Yamada, Hiroshi Takahashi, Yuki Yamanaka, Yasutoshi Ida
We reveal that the constraint of adversarial attacks is one cause of the non-smoothness and that the smoothness depends on the types of the constraints.
no code implementations • 5 Feb 2021 • Masanori Yamada, Sekitoshi Kanai, Tomoharu Iwata, Tomokatsu Takahashi, Yuki Yamanaka, Hiroshi Takahashi, Atsutoshi Kumagai
We theoretically and experimentally confirm that the weight loss landscape becomes sharper as the magnitude of the noise of adversarial training increases in the linear logistic regression model.
no code implementations • 6 Oct 2020 • Sekitoshi Kanai, Masanori Yamada, Shin'ya Yamaguchi, Hiroshi Takahashi, Yasutoshi Ida
We theoretically and empirically reveal that shrinking logits by adding a common activation function, e.g., the hyperbolic tangent, does not improve adversarial robustness, since the input vectors of the function (pre-logit vectors) can have large norms.
no code implementations • 25 Dec 2019 • Shin'ya Yamaguchi, Sekitoshi Kanai, Tetsuya Shioda, Shoichiro Takeda
Rotation prediction (Rotation) is a simple pretext task for self-supervised learning (SSL), in which models learn representations useful for target vision tasks by solving pretext tasks.
no code implementations • 25 Dec 2019 • Shin'ya Yamaguchi, Sekitoshi Kanai, Takeharu Eda
When each target dataset is reduced to 5,000 images, Domain Fusion achieves better classification accuracy than data augmentation using fine-tuned GANs.
no code implementations • 19 Sep 2019 • Sekitoshi Kanai, Yasutoshi Ida, Yasuhiro Fujiwara, Masanori Yamada, Shuichi Adachi
Furthermore, we reveal that robust CNNs with Absum are more robust than standard regularization methods against transferred attacks and high-frequency noise, because Absum decreases the common sensitivity.
no code implementations • 26 Mar 2019 • Yuki Yamanaka, Tomoharu Iwata, Hiroshi Takahashi, Masanori Yamada, Sekitoshi Kanai
Since our approach reconstructs normal data points accurately but fails to reconstruct known and unknown anomalies, it can discriminate both known and unknown anomalies from normal data points.
no code implementations • NeurIPS 2018 • Sekitoshi Kanai, Yasuhiro Fujiwara, Yuki Yamanaka, Shuichi Adachi
On the basis of this analysis, we propose sigsoftmax, which is composed of the product of an exponential function and the sigmoid function.
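That construction can be sketched directly from the description: each unnormalized score is exp(z_i) times sigmoid(z_i), normalized to sum to one (the max-subtraction below is a standard numerical-stability trick and cancels in the ratio):

```python
import numpy as np

def sigsoftmax(z):
    """Sigsoftmax over a 1D logit vector z: normalize exp(z_i) * sigmoid(z_i).
    Subtracting z.max() inside exp only stabilizes the computation; the
    factor cancels between numerator and denominator."""
    u = np.exp(z - z.max()) * (1.0 / (1.0 + np.exp(-z)))
    return u / u.sum()
```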
no code implementations • NeurIPS 2017 • Sekitoshi Kanai, Yasuhiro Fujiwara, Sotetsu Iwamura
This problem is caused by an abrupt change in the dynamics of the GRU due to a small variation in the parameters.