no code implementations • 11 Dec 2023 • Fengpeng Li, Kemou Li, Jinyu Tian, Jiantao Zhou
Training deep models requires large-scale annotated datasets.
1 code implementation • 19 Oct 2023 • Jun Liu, Jiantao Zhou, Haiwei Wu, Weiwei Sun, Jinyu Tian
In this work, we aim to design a new framework for generating robust AEs that can survive the OSN transmission; namely, the AEs before and after the OSN transmission both possess strong attack capabilities.
1 code implementation • 19 Oct 2023 • Jun Liu, Jiantao Zhou, Jinyu Tian, Weiwei Sun
Extensive experiments demonstrate that 1) the classification accuracy of the classifier trained in the plaintext domain remains the same in both the ciphertext and plaintext domains; 2) the encrypted images can be recovered into their original form with an average PSNR of up to 51+ dB for the SVHN dataset and 48+ dB for the VGGFace2 dataset; 3) our system exhibits satisfactory generalization capability on the encryption, decryption and classification tasks across datasets that are different from the training one; and 4) a high level of security is achieved against three potential threat models.
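The recovery quality above is reported in PSNR (peak signal-to-noise ratio). As a quick reference for what those dB figures measure, here is a minimal PSNR implementation (standard definition, not code from the paper):

```python
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((np.asarray(original, dtype=np.float64)
                   - np.asarray(reconstructed, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

clean = np.zeros((8, 8))
recovered = np.full((8, 8), 10.0)
print(round(psnr(clean, recovered), 2))  # MSE = 100, so 10*log10(255^2/100) ≈ 28.13
```

At 48-51 dB, the per-pixel MSE is well below 1 on a 0-255 scale, i.e. the decrypted image is visually indistinguishable from the original.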
1 code implementation • 26 Sep 2023 • Jun Liu, Jiantao Zhou, Jiandian Zeng, Jinyu Tian
In addition, because it avoids using surrogate models' gradient information when optimizing AEs for black-box models, our proposed DifAttack inherently possesses better attack capability in the open-set scenario, where the training dataset of the victim model is unknown.
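To illustrate the gradient-free, query-only setting DifAttack operates in (this is a generic score-based random-search attack on a toy linear classifier, not the DifAttack algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black-box "victim": a fixed linear classifier the attacker can
# only query for output scores -- no gradients, no surrogate model.
W = rng.standard_normal((3, 10))  # 3 classes, 10-dim inputs

def query_scores(x):
    return W @ x

x = rng.standard_normal(10)
label = int(np.argmax(query_scores(x)))

def margin(x_adv):
    """True-class score minus best other score; <= 0 means misclassified."""
    s = query_scores(x_adv)
    return s[label] - np.max(np.delete(s, label))

# Score-based random search: keep only perturbations that shrink the margin.
x_adv, eps = x.copy(), 0.5
for _ in range(2000):
    cand = x_adv + 0.1 * rng.standard_normal(10)
    cand = x + np.clip(cand - x, -eps, eps)  # stay in an L_inf ball around x
    if margin(cand) < margin(x_adv):
        x_adv = cand
```

The attack only ever reads output scores, so nothing about it depends on knowing the victim's training data or architecture.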
1 code implementation • 4 Aug 2023 • Jiacheng Deng, Li Dong, Jiahao Chen, Diqun Yan, Rangding Wang, Dengpan Ye, Lingchen Zhao, Jinyu Tian
In this work, we propose a novel and effective defense mechanism termed the Universal Defensive Underpainting Patch (UDUP) that modifies the underpainting of text images instead of the characters.
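The key idea is to perturb only the underpainting (background) while leaving the character strokes untouched. A toy sketch of that masking idea on a synthetic binary text image (an illustration of the concept, not the UDUP optimization procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "text image": black strokes (0) on a white underpainting (255).
img = np.full((32, 32), 255.0)
img[10:22, 8] = 0    # a vertical stroke
img[10, 8:20] = 0    # a horizontal stroke

char_mask = img == 0  # character pixels: must stay untouched

# A small patch tiled across the image, applied to background pixels only.
patch = rng.uniform(-30, 30, size=(8, 8))
tiled = np.tile(patch, (4, 4))
attacked = np.clip(img + np.where(char_mask, 0.0, tiled), 0, 255)
```

Because the characters themselves are never modified, the perturbation is independent of the text content, which is what makes an underpainting patch "universal" across different strings.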
Optical Character Recognition (OCR)
1 code implementation • CVPR 2022 • Haiwei Wu, Jiantao Zhou, Jinyu Tian, Jun Liu
To combat OSN-shared forgeries, this work proposes a novel robust training scheme.
no code implementations • CVPR 2021 • Jinyu Tian, Jiantao Zhou, Jia Duan
Model protection is vital when deploying Convolutional Neural Networks (CNNs) for commercial services, due to the massive costs of training them.
no code implementations • NeurIPS 2021 • Zhaoxi Zhang, Leo Yu Zhang, Xufei Zheng, Jinyu Tian, Jiantao Zhou
To alleviate this problem, we explore how to detect adversarial examples with disentangled label/semantic features under the autoencoder structure.
1 code implementation • 7 Mar 2021 • Jinyu Tian, Jiantao Zhou, Yuanman Li, Jia Duan
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples (AEs), which are maliciously designed to cause dramatic model output errors.
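The canonical illustration of such maliciously designed inputs is the fast gradient sign method (FGSM): a single step in the signed direction of the input gradient of the loss. A minimal, self-contained sketch on a toy logistic-regression model (the hand-picked weights and epsilon are for illustration only):

```python
import numpy as np

# Toy logistic-regression "model" with fixed weights.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict_prob(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))  # P(y = 1 | x)

x = np.array([0.2, -0.4, 0.3])
y = 1.0  # true label; the clean input is correctly classified

# For cross-entropy loss, the input gradient is (p - y) * w.
grad_x = (predict_prob(x) - y) * w
eps = 0.5
x_adv = x + eps * np.sign(grad_x)  # FGSM: one signed-gradient step

print(predict_prob(x) > 0.5, predict_prob(x_adv) > 0.5)  # True False
```

The perturbation is bounded by eps in every coordinate, yet it flips the model's decision, which is exactly the sense in which DNNs are "vulnerable" to such examples.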