Search Results for author: MaungMaung AprilPyone

Found 12 papers, 0 papers with code

Image and Model Transformation with Secret Key for Vision Transformer

no code implementations • 12 Jul 2022 • Hitoshi Kiya, Ryota Iijima, MaungMaung AprilPyone, Yuma Kinoshita

In this paper, we propose a combined use of transformed images and vision transformer (ViT) models transformed with a secret key.

Image Classification

Adversarial Detector with Robust Classifier

no code implementations • 5 Feb 2022 • Takayuki Osakabe, MaungMaung AprilPyone, Sayaka Shiota, Hitoshi Kiya

Deep neural network (DNN) models are well known to be easily fooled into misclassifying input images with small perturbations, called adversarial examples.

Access Control Using Spatially Invariant Permutation of Feature Maps for Semantic Segmentation Models

no code implementations • 3 Sep 2021 • Hiroki Ito, MaungMaung AprilPyone, Hitoshi Kiya

In an experiment, the protected models were demonstrated not only to allow rightful users to obtain almost the same performance as non-protected models but also to be robust against access by unauthorized users without a key.

Image Classification, Segmentation +1

A Protection Method of Trained CNN Model Using Feature Maps Transformed With Secret Key From Unauthorized Access

no code implementations • 1 Sep 2021 • MaungMaung AprilPyone, Hitoshi Kiya

In this paper, we propose a model protection method for convolutional neural networks (CNNs) with a secret key, so that authorized users obtain high classification accuracy while unauthorized users obtain low classification accuracy.

Classification

Protecting Semantic Segmentation Models by Using Block-wise Image Encryption with Secret Key from Unauthorized Access

no code implementations • 20 Jul 2021 • Hiroki Ito, MaungMaung AprilPyone, Hitoshi Kiya

Since production-level trained deep neural networks (DNNs) are of great business value, protecting such DNN models against copyright infringement and unauthorized access is in rising demand.

Image Classification, Segmentation +1

Piracy-Resistant DNN Watermarking by Block-Wise Image Transformation with Secret Key

no code implementations • 9 Apr 2021 • MaungMaung AprilPyone, Hitoshi Kiya

In this paper, we propose a novel DNN watermarking method that utilizes a learnable image transformation method with a secret key.

Transfer Learning-Based Model Protection With Secret Key

no code implementations • 5 Mar 2021 • MaungMaung AprilPyone, Hitoshi Kiya

Models with pre-trained weights are fine-tuned by using images transformed with a secret key.

Transfer Learning

Ensemble of Models Trained by Key-based Transformed Images for Adversarially Robust Defense Against Black-box Attacks

no code implementations • 16 Nov 2020 • MaungMaung AprilPyone, Hitoshi Kiya

In the proposed ensemble, a number of models are trained by using images transformed with different keys and block sizes, and then a voting ensemble is applied to the models.

Image Classification
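The voting scheme sketched in this abstract — several models, each trained on images transformed with its own key, combined by majority vote — can be illustrated with a minimal Python sketch. The paper's exact block-wise transformation and key schedule are not given here, so `key_transform`, the seeded-permutation stand-in, and the `(key, model)` pairing are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from collections import Counter

def key_transform(x, key):
    # Stand-in for the paper's key-based block-wise transformation:
    # here, a key-seeded permutation of the flattened image
    # (a simplification for illustration).
    rng = np.random.default_rng(key)
    return x.ravel()[rng.permutation(x.size)].reshape(x.shape)

def ensemble_predict(x, keyed_models):
    # keyed_models: list of (key, model); each model classifies the
    # image transformed with its own secret key. Majority vote wins.
    votes = [model(key_transform(x, key)) for key, model in keyed_models]
    return Counter(votes).most_common(1)[0][0]
```

An attacker without the keys cannot reproduce the inputs each sub-model expects, which is what makes the transformation part of the defense rather than mere preprocessing.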

Block-wise Image Transformation with Secret Key for Adversarially Robust Defense

no code implementations • 2 Oct 2020 • MaungMaung AprilPyone, Hitoshi Kiya

In the best-case scenario, a model trained on images transformed by FFX encryption (block size of 4) yielded an accuracy of 92.30% on clean images and 91.48% under a PGD attack with a noise distance of 8/255, close to the non-robust accuracy (95.45%) on the CIFAR-10 dataset. On the ImageNet dataset, it yielded an accuracy of 72.18% on clean images and 71.43% under the same attack, also close to the standard accuracy (73.70%).

Training DNN Model with Secret Key for Model Protection

no code implementations • 6 Aug 2020 • MaungMaung AprilPyone, Hitoshi Kiya

In this paper, we propose, for the first time, a model protection method that uses block-wise pixel shuffling with a secret key as a preprocessing technique for input images.
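Block-wise pixel shuffling with a secret key can be sketched in a few lines of NumPy. This is only a minimal illustration: we assume one key-seeded permutation reused for every block, whereas the paper's exact key schedule may differ, and the function names are ours.

```python
import numpy as np

def blockwise_shuffle(img, key, block=4, inverse=False):
    # Shuffle (or, with inverse=True, unshuffle) pixels inside each
    # block x block tile using a permutation derived from the key.
    # Sketch only: one seeded permutation is reused for every tile.
    h, w, c = img.shape
    assert h % block == 0 and w % block == 0
    rng = np.random.default_rng(key)
    perm = rng.permutation(block * block)
    if inverse:
        perm = np.argsort(perm)  # inverse permutation undoes the shuffle
    out = np.empty_like(img)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y + block, x:x + block].reshape(block * block, c)
            out[y:y + block, x:x + block] = tile[perm].reshape(block, block, c)
    return out
```

Only a user holding the key can produce the shuffled inputs the protected model was trained on; inputs shuffled with a wrong key (or not shuffled at all) yield degraded accuracy.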

Encryption Inspired Adversarial Defense for Visual Classification

no code implementations • 16 May 2020 • MaungMaung AprilPyone, Hitoshi Kiya

The experiments are carried out on both adaptive and non-adaptive maximum-norm bounded white-box attacks while considering obfuscated gradients.

Adversarial Defense, Classification +1
