Search Results for author: Chaoran Li

Found 4 papers, 1 paper with code

Man-in-the-Middle Attacks against Machine Learning Classifiers via Malicious Generative Models

no code implementations • 14 Oct 2019 • Derui Wang, Chaoran Li, Sheng Wen, Surya Nepal, Yang Xiang

First, such attacks must query the models multiple times to acquire their outputs before actually launching the attack, which is difficult for the MitM adversary in practice.

BIG-bench Machine Learning

Android HIV: A Study of Repackaging Malware for Evading Machine-Learning Detection

no code implementations • 10 Aug 2018 • Xiao Chen, Chaoran Li, Derui Wang, Sheng Wen, Jun Zhang, Surya Nepal, Yang Xiang, Kui Ren

In contrast to existing works, the adversarial examples crafted by our method can also deceive recent machine-learning-based detectors that rely on semantic features such as control-flow graphs.

Cryptography and Security

Defending against Adversarial Attack towards Deep Neural Networks via Collaborative Multi-task Training

no code implementations • 14 Mar 2018 • Derek Wang, Chaoran Li, Sheng Wen, Surya Nepal, Yang Xiang

For example, proactive defence methods are ineffective against grey-box or white-box attacks, while reactive defence methods are challenged by low-distortion or transferable adversarial examples.

Adversarial Attack
