no code implementations • 13 Feb 2024 • AprilPyone MaungMaung, Huy H. Nguyen, Hitoshi Kiya, Isao Echizen
To this end, we utilize an existing approach to personalizing large-scale text-to-image diffusion models with discovered spurious images and propose a new spurious feature similarity loss based on neural features of an adversarially robust model.
no code implementations • 15 Jan 2024 • Tinghui Ouyang, AprilPyone MaungMaung, Koichi Konishi, Yoshiki Seo, Isao Echizen
In the era of large AI models, complex architectures and vast parameter counts present substantial challenges for effective AI quality management (AIQM), e.g., for large language models (LLMs).
no code implementations • 28 Nov 2023 • AprilPyone MaungMaung, Isao Echizen, Hitoshi Kiya
In this paper, we propose proliferating key-based defense models by leveraging pre-trained models and recent efficient fine-tuning techniques on ImageNet-1k classification.
no code implementations • 4 Sep 2023 • AprilPyone MaungMaung, Isao Echizen, Hitoshi Kiya
In this paper, we propose a new key-based defense focusing on both efficiency and robustness.
no code implementations • 9 Mar 2023 • AprilPyone MaungMaung, Hitoshi Kiya
By taking advantage of leaked information from encrypted images, we propose a guided generative model as an attack on learnable image encryption to recover personally identifiable visual information.
no code implementations • 14 Feb 2023 • AprilPyone MaungMaung, Makoto Shing, Kentaro Mitsui, Kei Sawada, Fumio Okura
To this end, we leverage knowledge from recent large-scale pre-trained generative models, resulting in text-guided sketch-to-photo synthesis without the need for reference images.
no code implementations • 12 Jan 2023 • Zheng Qi, AprilPyone MaungMaung, Hitoshi Kiya
In recent years, with the development of cloud computing platforms, privacy-preserving methods for deep learning have become an urgent need.
no code implementations • 29 Sep 2022 • Teru Nagamori, Hiroki Ito, AprilPyone MaungMaung, Hitoshi Kiya
In an experiment, the protected models allowed authorized users to obtain almost the same performance as non-protected models while remaining robust against unauthorized access without a key.
no code implementations • 16 Sep 2022 • AprilPyone MaungMaung, Hitoshi Kiya
In this paper, we propose an attack method on block-scrambled face images, in particular Encryption-then-Compression (EtC) images, utilizing the existing powerful StyleGAN encoder and decoder for the first time.
no code implementations • 4 Aug 2022 • Zheng Qi, AprilPyone MaungMaung, Hitoshi Kiya
In this paper, we propose a privacy-preserving image classification method using encrypted images based on the ConvMixer structure.
no code implementations • 11 Jun 2022 • Hiroki Ito, AprilPyone MaungMaung, Sayaka Shiota, Hitoshi Kiya
In this paper, we propose an access control method with a secret key for semantic segmentation models for the first time so that unauthorized users without a secret key cannot benefit from the performance of trained models.
no code implementations • 24 May 2022 • Zheng Qi, AprilPyone MaungMaung, Yuma Kinoshita, Hitoshi Kiya
In this paper, we propose a privacy-preserving image classification method that is based on the combined use of encrypted images and the vision transformer (ViT).
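Block-wise encryption of this kind pairs naturally with ViT, whose patch embedding processes image blocks independently. A minimal sketch of key-seeded within-block pixel shuffling, one block-wise transformation used in this line of work (the function name, block size, and key handling are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def keyed_block_shuffle(image: np.ndarray, block_size: int, key: int) -> np.ndarray:
    """Shuffle pixels within each block using a key-seeded permutation.

    Sketch of block-wise learnable image encryption: every block is
    permuted with the same secret permutation, so training and test
    images transformed with the same key remain mutually consistent.
    """
    h, w, c = image.shape
    assert h % block_size == 0 and w % block_size == 0
    rng = np.random.default_rng(key)
    # One fixed permutation of the flattened block, shared by all blocks.
    perm = rng.permutation(block_size * block_size * c)
    out = image.copy()
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            block = out[y:y + block_size, x:x + block_size].reshape(-1)
            out[y:y + block_size, x:x + block_size] = block[perm].reshape(
                block_size, block_size, c)
    return out
```

An authorized user applies the same key at training and test time; without the key, inputs are inconsistent with the trained model and accuracy degrades.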
no code implementations • 16 Apr 2022 • AprilPyone MaungMaung, Hitoshi Kiya
In addition, compressible encrypted images, called encryption-then-compression (EtC) images, can be used for both training and testing without any adaptation network.
no code implementations • 26 Jan 2022 • Hitoshi Kiya, AprilPyone MaungMaung, Yuma Kinoshita, Shoko Imaizumi, Sayaka Shiota
In this paper, we focus on a class of image transformations referred to as learnable image encryption, which is applicable to privacy-preserving machine learning and adversarially robust defense.
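One simple transformation in the learnable image encryption family is the keyed negative-positive transformation, where each pixel is either inverted or left unchanged according to a key-seeded binary mask. A hedged sketch (the function name and mask scheme are assumptions for illustration):

```python
import numpy as np

def negative_positive_transform(image: np.ndarray, key: int) -> np.ndarray:
    """Keyed negative-positive transformation (sketch).

    Each pixel p is replaced by 255 - p where a key-seeded binary mask
    is 1, and kept where it is 0. The transform is its own inverse
    under the same key.
    """
    rng = np.random.default_rng(key)
    mask = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    return np.where(mask == 1, 255 - image, image)
```

Because the transform is self-inverse, applying it twice with the correct key recovers the original image, while a wrong key produces a different mask and fails to decrypt.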
no code implementations • 17 Nov 2021 • Ryota Iijima, AprilPyone MaungMaung, Hitoshi Kiya
In this paper, we propose a block-wise image transformation method with a secret key for support vector machine (SVM) models.
no code implementations • 31 May 2021 • AprilPyone MaungMaung, Hitoshi Kiya
In this paper, we propose a novel method for protecting convolutional neural network (CNN) models with a secret key set so that unauthorized users without the correct key set cannot access trained models.