Search Results for author: Tsubasa Takahashi

Found 14 papers, 4 papers with code

Frequency-aware GAN for Adversarial Manipulation Generation

no code implementations ICCV 2023 Peifei Zhu, Genki Osada, Hirokatsu Kataoka, Tsubasa Takahashi

We observe that existing spatial attacks cause large degradation in image quality, and we find that the loss of high-frequency detail is likely the major reason.

Adversarial Attack · Decoder

Scaling Private Deep Learning with Low-Rank and Sparse Gradients

no code implementations 6 Jul 2022 Ryuichi Ito, Seng Pei Liew, Tsubasa Takahashi, Yuya Sasaki, Makoto Onizuka

Applying Differentially Private Stochastic Gradient Descent (DPSGD) to training modern, large-scale neural networks such as transformer-based models is challenging, as the magnitude of noise added to the gradients at each iteration scales with model dimension, significantly hindering learning.
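The dimension-dependence mentioned above can be seen in a minimal sketch of vanilla DP-SGD aggregation (this illustrates the baseline problem, not the paper's low-rank/sparse remedy): each per-example gradient is clipped in L2 norm, the clipped gradients are summed, and Gaussian noise with per-coordinate standard deviation proportional to the clipping bound is added, so the expected L2 norm of the noise grows as sigma·sqrt(dim). The function name and signature here are illustrative.

```python
import math
import random

def dpsgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=random):
    """One simplified DP-SGD aggregation step: clip each example's
    gradient to clip_norm (L2), sum, add Gaussian noise, then average."""
    n = len(per_example_grads)
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / max(norm, 1e-12))  # clip to clip_norm
        for i in range(dim):
            summed[i] += g[i] * scale
    sigma = noise_multiplier * clip_norm  # noise std per coordinate
    return [(s + rng.gauss(0.0, sigma)) / n for s in summed]
```

With `noise_multiplier=0` the step reduces to plain clipped averaging, which makes the clipping behavior easy to check.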

Shuffle Gaussian Mechanism for Differential Privacy

1 code implementation 20 Jun 2022 Seng Pei Liew, Tsubasa Takahashi

We study the Gaussian mechanism in the shuffle model of differential privacy (DP).
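A minimal sketch of one round in the shuffle model with a Gaussian local randomizer (names are illustrative; the paper's analysis of the resulting privacy guarantee is not reproduced here): each client adds Gaussian noise to its value, and a trusted shuffler permutes the reports so the server cannot link a report back to its sender.

```python
import random

def shuffle_gaussian_round(client_values, local_sigma, rng=random):
    """Each client perturbs its value with Gaussian noise (the local
    randomizer); a trusted shuffler then randomly permutes the reports,
    severing the link between a report and the client who sent it."""
    reports = [v + rng.gauss(0.0, local_sigma) for v in client_values]
    rng.shuffle(reports)  # the shuffler hides client identity
    return reports
```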

Federated Learning

Privacy Amplification via Shuffled Check-Ins

1 code implementation 7 Jun 2022 Seng Pei Liew, Satoshi Hasegawa, Tsubasa Takahashi

We study a protocol for distributed computation called shuffled check-in, which achieves strong privacy guarantees without requiring any further trust assumptions beyond a trusted shuffler.
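The protocol's core idea can be sketched in a few lines (a simplified illustration, assuming a single scalar report per client; the function name and parameters are hypothetical): each client independently "checks in" with some probability, and the shuffler permutes whatever reports arrive, so both the random subsampling and the shuffle contribute to privacy amplification.

```python
import random

def shuffled_check_in(client_values, check_in_prob, rng=random):
    """Each client independently participates ('checks in') with
    probability check_in_prob; the trusted shuffler then permutes the
    reports that do arrive before they reach the server."""
    reports = [v for v in client_values if rng.random() < check_in_prob]
    rng.shuffle(reports)
    return reports
```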

Federated Learning

Network Shuffling: Privacy Amplification via Random Walks

no code implementations 8 Apr 2022 Seng Pei Liew, Tsubasa Takahashi, Shun Takagi, Fumiyuki Kato, Yang Cao, Masatoshi Yoshikawa

However, introducing a centralized entity into the originally local privacy model forfeits one of local differential privacy's main appeals: the absence of any centralized entity.
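The title's mechanism can be sketched without a central shuffler (a toy illustration under the assumption of a known user graph; the mixing analysis is the paper's contribution and is omitted): a report is forwarded along a random walk among users, so after enough hops its final holder is close to uniform and the server cannot tell which user originated it.

```python
import random

def random_walk_forward(neighbors, start, steps, rng=random):
    """Forward a report along a random walk over the user graph:
    at each hop, pass it to a uniformly random neighbor. After enough
    hops the holder is near-uniform, decoupling report from origin."""
    node = start
    for _ in range(steps):
        node = rng.choice(neighbors[node])
    return node
```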

PEARL: Data Synthesis via Private Embeddings and Adversarial Reconstruction Learning

1 code implementation ICLR 2022 Seng Pei Liew, Tsubasa Takahashi, Michihiko Ueno

We propose a new framework of synthesizing data using deep generative models in a differentially private manner.

FaceLeaks: Inference Attacks against Transfer Learning Models via Black-box Queries

no code implementations 27 Oct 2020 Seng Pei Liew, Tsubasa Takahashi

We investigate if one can leak or infer such private information without interacting with the teacher model directly.

Face Recognition · Transfer Learning

P3GM: Private High-Dimensional Data Release via Privacy Preserving Phased Generative Model

2 code implementations 22 Jun 2020 Shun Takagi, Tsubasa Takahashi, Yang Cao, Masatoshi Yoshikawa

The state-of-the-art approach for this problem is to build a generative model under differential privacy, which offers a rigorous privacy guarantee.

Privacy Preserving

Differentially Private Variational Autoencoders with Term-wise Gradient Aggregation

no code implementations 19 Jun 2020 Tsubasa Takahashi, Shun Takagi, Hajime Ono, Tatsuya Komatsu

This paper studies how to learn variational autoencoders with a variety of divergences under differential privacy constraints.
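The title's term-wise idea can be sketched as follows (a simplified guess at the mechanism from the title alone, with noise addition omitted and all names hypothetical): a VAE loss decomposes into terms such as reconstruction and KL divergence, and clipping each term's per-example gradient with its own bound prevents one term's large gradients from forcing a loose clipping threshold on the others.

```python
import math

def clip(vec, bound):
    """Scale vec so its L2 norm is at most bound."""
    norm = math.sqrt(sum(x * x for x in vec))
    scale = min(1.0, bound / max(norm, 1e-12))
    return [x * scale for x in vec]

def termwise_aggregate(term_grads, term_bounds):
    """term_grads maps a loss term (e.g. 'recon', 'kl') to a list of
    per-example gradient vectors. Each term is clipped with its own
    bound before summing across terms and examples."""
    dim = len(next(iter(term_grads.values()))[0])
    total = [0.0] * dim
    for term, grads in term_grads.items():
        for g in grads:
            for i, x in enumerate(clip(g, term_bounds[term])):
                total[i] += x
    return total
```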

Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks

no code implementations 19 Feb 2020 Tsubasa Takahashi

In this paper, we demonstrate that the node classifier can be deceived with high confidence by poisoning just a single node, even one located two hops or more away from the target.
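Why a two-hop neighbor can matter is easy to see from GCN-style feature propagation (a toy sketch with uniform neighborhood averaging and no learned weights, not the paper's attack): each layer mixes a node with its neighbors, so two layers carry a poisoned node's features two hops to the target.

```python
def propagate(adj, feats):
    """One GCN-style propagation step: each node averages its own
    feature with its neighbors' (self-loop included)."""
    n = len(adj)
    out = []
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j] or i == j]
        out.append(sum(feats[j] for j in nbrs) / len(nbrs))
    return out
```

On a path graph 0–1–2, a perturbation injected at node 0 reaches node 2 only after the second propagation step.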

General Classification · Node Classification

Locally Private Distributed Reinforcement Learning

no code implementations 31 Jan 2020 Hajime Ono, Tsubasa Takahashi

To the best of our knowledge, this is the first work that realizes distributed reinforcement learning under LDP.

Reinforcement Learning (RL)

Lightweight Lipschitz Margin Training for Certified Defense against Adversarial Examples

no code implementations 20 Nov 2018 Hajime Ono, Tsubasa Takahashi, Kazuya Kakizaki

Lipschitz margin training (LMT) is a scalable certified defense, but it achieves only limited certified robustness due to over-regularization.
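The margin-based certificate underlying LMT can be sketched in one function (a simplified statement of the standard bound; the function name is illustrative): if the network is L-Lipschitz in L2 and the top logit exceeds the runner-up by margin m, the predicted class cannot change within L2 radius m / (sqrt(2)·L) of the input. LMT trains the network to enlarge this margin while controlling L.

```python
import math

def certified_radius(logits, lipschitz_const):
    """Lipschitz-margin certificate: with an L-Lipschitz network (L2)
    and top-two logit gap m, the prediction is provably unchanged
    within L2 radius m / (sqrt(2) * L) of the input."""
    top, runner_up = sorted(logits, reverse=True)[:2]
    return (top - runner_up) / (math.sqrt(2.0) * lipschitz_const)
```

Over-regularization shows up here directly: shrinking L inflates the certified radius but also constrains the network enough to keep the margin m small.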
