Evaluating Adversarial Evasion Attacks in the Context of Wireless Communications

1 Mar 2019 · Bryse Flowers, R. Michael Buehrer, William C. Headley

Recent advancements in radio frequency machine learning (RFML) have demonstrated the use of raw in-phase and quadrature (IQ) samples for multiple spectrum sensing tasks. Yet, deep learning techniques have been shown, in other applications, to be vulnerable to adversarial machine learning (ML) techniques, which seek to craft small perturbations that are added to the input to cause a misclassification. The current work differentiates the threats that adversarial ML poses to RFML systems based on where the attack is launched from: direct access to the classifier input, synchronously transmitted over the air (OTA), or asynchronously transmitted from a separate device. Additionally, the current work develops a methodology for evaluating adversarial success in the context of wireless communications, where the primary metric of interest is bit error rate rather than human perception, as is the case in image recognition. The methodology is demonstrated using the well-known Fast Gradient Sign Method (FGSM) to evaluate the vulnerabilities of raw-IQ-based Automatic Modulation Classification, and concludes that RFML is vulnerable to adversarial examples, even in OTA attacks. However, receiver effects specific to the RFML domain, which would be encountered in an OTA attack, can present significant impairments to adversarial evasion.
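
To make the attack concrete, below is a minimal sketch of FGSM applied to raw IQ input in the direct-access threat model, assuming a PyTorch classifier `model` that maps a (batch, 2, N) float tensor of in-phase/quadrature channels to modulation logits. The function name `fgsm_iq`, the tensor layout, and the epsilon budget are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_iq(model, iq, labels, epsilon):
    """Craft FGSM adversarial examples from raw IQ samples.

    iq:      float tensor of shape (batch, 2, num_samples) holding the
             in-phase and quadrature channels of the baseband signal.
    labels:  true modulation class indices, shape (batch,).
    epsilon: L-infinity perturbation budget in linear amplitude units
             (hypothetical knob; the paper relates perturbation power
             to communications metrics rather than a raw epsilon).
    """
    iq = iq.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(iq), labels)
    loss.backward()
    # Step each sample by +/- epsilon in the direction that increases
    # the classifier's loss (the sign of the input gradient).
    adversarial = iq + epsilon * iq.grad.sign()
    return adversarial.detach()
```

Note that this sketch covers only the direct-access case; in the OTA threat models the abstract describes, the crafted perturbation must also survive receiver effects (e.g., noise and synchronization impairments), which the paper identifies as a significant obstacle to adversarial evasion.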
