no code implementations • 15 Apr 2024 • Peifei Zhu, Tsubasa Takahashi, Hirokatsu Kataoka
Diffusion Models (DMs) have shown remarkable capabilities in various image-generation tasks.
no code implementations • 16 Feb 2024 • Genki Osada, Tsubasa Takahashi, Takashi Nishide
Finally, we provide evidence of the potential applicability of our hypothesis in another DGM, PixelCNN++.
Out-of-Distribution (OOD) Detection
no code implementations • ICCV 2023 • Peifei Zhu, Genki Osada, Hirokatsu Kataoka, Tsubasa Takahashi
We observe that existing spatial attacks cause large degradation in image quality, and we find that the loss of high-frequency detail components is likely the major reason.
no code implementations • 6 Jul 2022 • Ryuichi Ito, Seng Pei Liew, Tsubasa Takahashi, Yuya Sasaki, Makoto Onizuka
Applying Differentially Private Stochastic Gradient Descent (DPSGD) to training modern, large-scale neural networks such as transformer-based models is a challenging task: the magnitude of the noise added to the gradients at each iteration scales with model dimension, significantly hindering learning.
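To illustrate the issue the abstract describes, here is a minimal sketch of one DPSGD aggregation step: per-example gradients are clipped to a norm bound and Gaussian noise proportional to that bound is added. The function name `dp_sgd_step` and all parameter values are illustrative assumptions, not the paper's implementation; the point is that the noise vector has one component per model parameter, so its total magnitude grows with model dimension.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD gradient aggregation step (illustrative sketch).

    per_example_grads: array of shape (batch, dim).
    Each per-example gradient is clipped to clip_norm, the clipped
    gradients are summed, and Gaussian noise with standard deviation
    noise_multiplier * clip_norm is added to every coordinate.
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Scale down any gradient whose norm exceeds the clipping bound.
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * factors
    summed = clipped.sum(axis=0)
    # One noise coordinate per model parameter: total noise norm
    # grows roughly as sqrt(dim), which is the scaling problem for
    # large (e.g. transformer-sized) models.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
grads = rng.normal(size=(8, 4))
g = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

With a fixed clipping bound and noise multiplier, the signal per coordinate stays roughly constant while the accumulated noise grows with dimension, which is why the paper frames large-model DPSGD training as hard.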
1 code implementation • 20 Jun 2022 • Seng Pei Liew, Tsubasa Takahashi
We study the Gaussian mechanism in the shuffle model of differential privacy (DP).
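For context, below is a sketch of the classical (central-model) Gaussian mechanism that shuffle-model analyses build on, using the standard calibration sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon, which is valid for epsilon in (0, 1). The function name and parameter choices are assumptions for illustration, not the protocol studied in the paper.

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
    """Release `value` with (epsilon, delta)-DP via additive Gaussian noise.

    Uses the standard bound sigma >= sqrt(2 ln(1.25/delta)) * sensitivity
    / epsilon, which holds for 0 < epsilon < 1.
    """
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

rng = np.random.default_rng(42)
# Privatize a scalar query answer with sensitivity 1.
noisy = gaussian_mechanism(10.0, sensitivity=1.0, epsilon=0.5, delta=1e-5, rng=rng)
```

In the shuffle model, each user adds noise locally and a trusted shuffler randomly permutes the reports; the analysis then shows the aggregate enjoys a much stronger (amplified) central-DP guarantee than any single local report.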
1 code implementation • 7 Jun 2022 • Seng Pei Liew, Satoshi Hasegawa, Tsubasa Takahashi
We study a protocol for distributed computation called shuffled check-in, which achieves strong privacy guarantees without requiring any further trust assumptions beyond a trusted shuffler.
no code implementations • 8 Apr 2022 • Seng Pei Liew, Tsubasa Takahashi, Shun Takagi, Fumiyuki Kato, Yang Cao, Masatoshi Yoshikawa
However, introducing a centralized entity into an originally local privacy model sacrifices one of the main appeals of local differential privacy: the absence of any centralized entity.
1 code implementation • ICLR 2022 • Seng Pei Liew, Tsubasa Takahashi, Michihiko Ueno
We propose a new framework of synthesizing data using deep generative models in a differentially private manner.
no code implementations • 27 Oct 2020 • Seng Pei Liew, Tsubasa Takahashi
We investigate whether one can leak or infer such private information without interacting with the teacher model directly.
2 code implementations • 22 Jun 2020 • Shun Takagi, Tsubasa Takahashi, Yang Cao, Masatoshi Yoshikawa
The state-of-the-art approach for this problem is to build a generative model under differential privacy, which offers a rigorous privacy guarantee.
no code implementations • 19 Jun 2020 • Tsubasa Takahashi, Shun Takagi, Hajime Ono, Tatsuya Komatsu
This paper studies how to learn variational autoencoders with a variety of divergences under differential privacy constraints.
no code implementations • 19 Feb 2020 • Tsubasa Takahashi
In this paper, we demonstrate that the node classifier can be deceived with high confidence by poisoning just a single node, even one that is two or more hops away from the target.
no code implementations • 31 Jan 2020 • Hajime Ono, Tsubasa Takahashi
To the best of our knowledge, this is the first work to realize distributed reinforcement learning under LDP.
no code implementations • 20 Nov 2018 • Hajime Ono, Tsubasa Takahashi, Kazuya Kakizaki
Lipschitz margin training (LMT) is a scalable certified defense, but it achieves only limited robustness due to over-regularization.