no code implementations • 12 Jul 2022 • Hitoshi Kiya, Ryota Iijima, MaungMaung AprilPyone, Yuma Kinoshita
In this paper, we propose a combined use of transformed images and vision transformer (ViT) models transformed with a secret key.
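The key-based image transformation mentioned above can be illustrated with a small sketch that reorders the non-overlapping patches of an image using a permutation derived from a secret key, matching the patch structure a ViT operates on. The function name, patch size, and use of numpy's seeded generator are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def permute_patches(image, key, patch=4):
    # Divide the image into non-overlapping patch x patch blocks and reorder
    # them with a permutation derived from the secret key. Hypothetical
    # sketch; the paper's exact transform may differ.
    h, w, c = image.shape
    gh, gw = h // patch, w // patch
    # Rearrange into a flat list of (patch, patch, c) blocks.
    patches = (image.reshape(gh, patch, gw, patch, c)
                    .transpose(0, 2, 1, 3, 4)
                    .reshape(gh * gw, patch, patch, c))
    perm = np.random.default_rng(key).permutation(gh * gw)
    shuffled = patches[perm]
    # Reassemble the shuffled blocks into an image of the original shape.
    return (shuffled.reshape(gh, gw, patch, patch, c)
                    .transpose(0, 2, 1, 3, 4)
                    .reshape(h, w, c))
```

Because the permutation is fully determined by the key, the same key always reproduces the same transform, so training and inference can apply it consistently.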
no code implementations • 5 Feb 2022 • Takayuki Osakabe, MaungMaung AprilPyone, Sayaka Shiota, Hitoshi Kiya
Deep neural network (DNN) models are well known to easily misclassify input images with small perturbations, called adversarial examples.
no code implementations • 3 Sep 2021 • Hiroki Ito, MaungMaung AprilPyone, Hitoshi Kiya
In an experiment, the protected models were demonstrated not only to allow rightful users to obtain almost the same performance as non-protected models but also to be robust against access by unauthorized users without a key.
no code implementations • 1 Sep 2021 • MaungMaung AprilPyone, Hitoshi Kiya
In this paper, we propose a model protection method for convolutional neural networks (CNNs) with a secret key so that authorized users get a high classification accuracy, and unauthorized users get a low classification accuracy.
no code implementations • 20 Jul 2021 • Hiroki Ito, MaungMaung AprilPyone, Hitoshi Kiya
Since production-level trained deep neural networks (DNNs) have great business value, protecting such DNN models against copyright infringement and unauthorized access is in increasing demand.
no code implementations • 9 Apr 2021 • MaungMaung AprilPyone, Hitoshi Kiya
In this paper, we propose a novel DNN watermarking method that utilizes a learnable image transformation method with a secret key.
no code implementations • 5 Mar 2021 • MaungMaung AprilPyone, Hitoshi Kiya
Models with pre-trained weights are fine-tuned by using such transformed images.
no code implementations • 16 Nov 2020 • MaungMaung AprilPyone, Hitoshi Kiya
In the proposed ensemble, a number of models are trained by using images transformed with different keys and block sizes, and then a voting ensemble is applied to the models.
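The voting step described above can be sketched as a simple majority vote over the class predictions of the individually keyed models. The function name and array layout are assumptions for illustration.

```python
import numpy as np

def majority_vote(preds):
    # preds: (n_models, n_samples) array of predicted class labels, one row
    # per model trained with its own key and block size. Each column (sample)
    # is voted on independently; ties resolve to the smallest label, which is
    # numpy's bincount/argmax behaviour.
    preds = np.asarray(preds)
    return np.array([np.bincount(col).argmax() for col in preds.T])
```

For example, if three models predict `[0, 1, 2]`, `[0, 2, 2]`, and `[1, 1, 2]` on three samples, the ensemble outputs `[0, 1, 2]`.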
no code implementations • 2 Oct 2020 • MaungMaung AprilPyone, Hitoshi Kiya
In the best-case scenario, a model trained by using images transformed by FFX Encryption (block size of 4) yielded an accuracy of 92.30% on clean images and 91.48% under PGD attack with a noise distance of 8/255, which is close to the non-robust accuracy (95.45%) for the CIFAR-10 dataset, and it yielded an accuracy of 72.18% on clean images and 71.43% under the same attack, which is also close to the standard accuracy (73.70%) for the ImageNet dataset.
no code implementations • 6 Aug 2020 • MaungMaung AprilPyone, Hitoshi Kiya
In this paper, we propose, for the first time, a model protection method that uses block-wise pixel shuffling with a secret key as a preprocessing technique for input images.
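Block-wise pixel shuffling of this kind can be sketched as follows: pixel positions inside each block are permuted with a key-derived permutation, and the same key inverts the transform. This is a minimal sketch under assumed details (function name, seeded numpy generator), not the authors' implementation.

```python
import numpy as np

def blockwise_pixel_shuffle(img, key, bs=4, inverse=False):
    # Shuffle pixel positions inside each bs x bs block using a permutation
    # derived from the secret key; inverse=True applies the inverse
    # permutation and undoes the transform. Illustrative sketch only.
    perm = np.random.default_rng(key).permutation(bs * bs)
    if inverse:
        perm = np.argsort(perm)  # inverse permutation
    h, w, c = img.shape
    out = np.empty_like(img)
    for y in range(0, h, bs):
        for x in range(0, w, bs):
            blk = img[y:y+bs, x:x+bs].reshape(bs * bs, c)
            out[y:y+bs, x:x+bs] = blk[perm].reshape(bs, bs, c)
    return out
```

Only a holder of the key can produce correctly transformed inputs, which is what lets the protected model distinguish authorized from unauthorized users.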
no code implementations • 16 May 2020 • MaungMaung AprilPyone, Hitoshi Kiya
The experiments are carried out on both adaptive and non-adaptive maximum-norm bounded white-box attacks while considering obfuscated gradients.
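A maximum-norm bounded white-box attack of the kind evaluated above is typically PGD: repeated signed-gradient steps projected back into an L-infinity ball around the input. The sketch below assumes a caller-supplied `grad_fn` returning the loss gradient with respect to the input; names and defaults are illustrative, not the paper's setup.

```python
import numpy as np

def pgd_linf(x, grad_fn, eps=8/255, alpha=2/255, steps=10):
    # Maximum-norm (L-infinity) bounded PGD: take signed gradient steps,
    # then project back into the eps-ball around the original input and
    # into the valid pixel range [0, 1]. Minimal sketch.
    adv = x.copy()
    for _ in range(steps):
        adv = adv + alpha * np.sign(grad_fn(adv))
        adv = np.clip(adv, x - eps, x + eps)  # project into the eps-ball
        adv = np.clip(adv, 0.0, 1.0)          # keep valid pixel values
    return adv
```

An adaptive attack would compute `grad_fn` through the key-based transformation itself (addressing obfuscated gradients), whereas a non-adaptive one attacks the model without it.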
no code implementations • 31 Jul 2019 • MaungMaung AprilPyone, Warit Sirichotedumrong, Hitoshi Kiya
Data for deep learning should be protected for privacy preservation.