no code implementations • 30 Jun 2021 • Bowei Xi, Yujie Chen, Fan Fei, Zhan Tu, Xinyan Deng
Hence, in a successful physical attack against a DNN, targeted motion against the system should also be considered.
no code implementations • 30 Jun 2021 • Bowei Xi
Research in adversarial machine learning addresses a significant threat to the wide application of machine learning techniques: the models are vulnerable to carefully crafted attacks from malicious adversaries.
no code implementations • 30 Jun 2021 • Juan Shu, Bowei Xi, Charles Kamhoua
Our study reveals the structural problem of the DNN classification boundary that leads to adversarial examples.
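As a hedged illustration of how a small perturbation can push an input across a DNN's classification boundary, the sketch below uses a standard FGSM-style step; it does not reproduce the paper's own analysis of the boundary structure, and the toy model, data, and epsilon are illustrative assumptions.

```python
# Minimal sketch (assumption: FGSM-style perturbation, not this paper's method).
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_example(model, x, label, epsilon=0.05):
    """Return x shifted one epsilon-sized step along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

if __name__ == "__main__":
    # Toy, untrained classifier standing in for an arbitrary DNN.
    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    x = torch.randn(1, 784)
    label = model(x).argmax(dim=-1)          # treat the clean prediction as the label
    x_adv = fgsm_example(model, x, label, epsilon=0.05)
    print("clean:", label.item(), "perturbed:", model(x_adv).argmax(dim=-1).item())
```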
no code implementations • 11 May 2018 • Yan Zhou, Murat Kantarcioglu, Bowei Xi
We demonstrate that introducing randomness into DNN models is sufficient to defeat adversarial attacks, provided the adversary does not have an unlimited attack budget.
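One common way to introduce such randomness is to inject noise at inference time and average several noisy forward passes, so a budget-limited adversary must succeed against a distribution of models rather than a single deterministic one. The sketch below shows this idea under assumed choices (Gaussian input noise, a toy network, sigma and sample count); it is not necessarily the authors' exact construction.

```python
# Minimal sketch of a randomized-inference defense (assumed design, not the paper's exact method).
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Toy classifier standing in for an arbitrary DNN."""
    def __init__(self, in_dim=784, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def randomized_predict(model, x, sigma=0.1, n_samples=20):
    """Average logits over several Gaussian-noised copies of the input."""
    model.eval()
    with torch.no_grad():
        logits = torch.stack([
            model(x + sigma * torch.randn_like(x)) for _ in range(n_samples)
        ])
    return logits.mean(dim=0).argmax(dim=-1)

if __name__ == "__main__":
    model = SmallNet()
    x = torch.randn(4, 784)                  # batch of fake flattened images
    print(randomized_predict(model, x, sigma=0.1))
```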
no code implementations • 13 Apr 2018 • Wutao Wei, Bowei Xi, Murat Kantarcioglu
Most of the previous work focused on adversarial classification techniques, which assumed the existence of a reasonably large number of carefully labeled data instances.