Search Results for author: Minzhi Ji

Found 2 papers, 1 paper with code

Feature-Filter: Detecting Adversarial Examples through Filtering off Recessive Features

no code implementations • 19 Jul 2021 Hui Liu, Bo Zhao, Minzhi Ji, Yuefeng Peng, Jiabao Guo, Peng Liu

In this paper, we reveal that imperceptible adversarial examples are the product of recessive features misleading neural networks, and that an adversarial attack is essentially a method of enriching these recessive features in the image.

Adversarial Attack
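The abstract above suggests that adversarial inputs rely on imperceptible (recessive) detail that a classifier picks up but a human does not. Below is a minimal detection sketch along those lines, not the paper's actual Feature-Filter procedure: it assumes a PyTorch classifier and batched NCHW image tensors, low-pass filters the input with a Gaussian blur, and flags the input if the predicted label changes. The helper name `detect_by_feature_filtering` and the blur parameters are hypothetical choices for illustration.

```python
import torch
import torch.nn.functional as F

def detect_by_feature_filtering(model, image, kernel_size=5, sigma=1.5):
    """Flag inputs whose prediction flips after low-pass filtering.

    Sketch only: assumes `model` is a PyTorch classifier returning logits
    and `image` is a batched NCHW float tensor.
    """
    model.eval()
    with torch.no_grad():
        # Build a 2D Gaussian kernel and blur each channel independently.
        coords = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
        g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
        g = g / g.sum()
        kernel_2d = torch.outer(g, g)
        channels = image.shape[1]
        weight = kernel_2d.expand(channels, 1, kernel_size, kernel_size).contiguous()
        filtered = F.conv2d(image, weight, padding=kernel_size // 2, groups=channels)

        pred_original = model(image).argmax(dim=1)
        pred_filtered = model(filtered).argmax(dim=1)

    # A label change means the removed fine-grained detail was driving the
    # prediction, so the input is treated as suspicious.
    return pred_original != pred_filtered
```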

GreedyFool: Multi-Factor Imperceptibility and Its Application to Designing a Black-box Adversarial Attack

1 code implementation • 14 Oct 2020 Hui Liu, Bo Zhao, Minzhi Ji, Peng Liu

Adversarial examples are well-designed input samples in which perturbations are imperceptible to the human eye but easily mislead the output of deep neural networks (DNNs).

Adversarial Attack
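As a rough illustration of a greedy, query-based black-box perturbation in the spirit of the abstract above (this is not the paper's GreedyFool algorithm and ignores its multi-factor imperceptibility criteria), the sketch below assumes a hypothetical black-box `predict_fn` that maps an image with values in [0, 255] to a vector of class probabilities. It repeatedly proposes small single-pixel changes and greedily keeps only those that lower the model's confidence in the true label.

```python
import numpy as np

def greedy_blackbox_perturb(predict_fn, image, true_label,
                            step=8.0, max_queries=2000, seed=0):
    """Simplified greedy black-box perturbation sketch (illustrative only)."""
    rng = np.random.default_rng(seed)
    adv = image.astype(np.float32).copy()
    best_conf = predict_fn(adv)[true_label]

    for _ in range(max_queries):
        if predict_fn(adv).argmax() != true_label:
            return adv  # label flipped: attack succeeded
        # Propose a small change to one randomly chosen pixel/channel.
        idx = tuple(int(rng.integers(0, s)) for s in adv.shape)
        delta = step if rng.random() < 0.5 else -step
        candidate = adv.copy()
        candidate[idx] = np.clip(candidate[idx] + delta, 0.0, 255.0)
        # Greedy rule: keep the change only if the true-class confidence drops.
        cand_conf = predict_fn(candidate)[true_label]
        if cand_conf < best_conf:
            adv, best_conf = candidate, cand_conf
    return adv
```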
