When Attackers Meet AI: Learning-empowered Attacks in Cooperative Spectrum Sensing

4 May 2019  ·  Zhengping Luo, Shangqing Zhao, Zhuo Lu, Jie Xu, Yalin E. Sagduyu

Defense strategies have been well studied to combat Byzantine attacks that aim to disrupt cooperative spectrum sensing by sending falsified spectrum sensing data to a fusion center. However, existing studies usually treat the network or the attackers as passive entities, e.g., by assuming that prior knowledge of the attacks is available or that attack patterns are fixed. In practice, attackers can actively adopt arbitrary behaviors and avoid the patterns or assumptions that defense strategies rely on. In this paper, we revisit this security vulnerability as an adversarial machine learning problem and propose a novel learning-empowered attack framework, named Learning-Evaluation-Beating (LEB), to mislead the fusion center. Exploiting the black-box nature of the fusion center in cooperative spectrum sensing, our new perspective is to make adversarial use of machine learning to construct a surrogate model of the fusion center's decision model. We propose a generic algorithm to create malicious sensing data using this surrogate model. Our real-world experiments show that the LEB attack is effective at defeating a wide range of existing defense strategies, with a success ratio of up to 82%. Given the gap between the proposed LEB attack and existing defenses, we introduce a non-invasive method, named influence-limiting defense, which can coexist with existing defenses to counter the LEB attack and similar attacks. We show that this defense is highly effective and reduces the overall disruption ratio of the LEB attack by up to 80%.
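To make the Learning-Evaluation-Beating idea concrete, the following is a minimal sketch, not the paper's implementation: the fusion rule (a majority vote stand-in), the surrogate model (logistic regression), the node counts, and all names are illustrative assumptions. The attacker observes (reports, decision) pairs from the black-box fusion center, fits a surrogate, then searches over the reports of the nodes it controls for a falsified pattern the surrogate predicts will flip the fused decision.

```python
# Hypothetical LEB-style sketch; the fusion rule, surrogate choice, and
# parameters below are illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_NODES, N_MALICIOUS = 10, 3  # total sensors / attacker-controlled sensors

def fusion_center(reports):
    """Black-box stand-in: majority vote over binary sensing reports."""
    return (reports.sum(axis=1) > N_NODES / 2).astype(int)

# --- Learning: observe (reports, decision) pairs from the black box ---
X = rng.integers(0, 2, size=(5000, N_NODES))
y = fusion_center(X)
surrogate = LogisticRegression().fit(X, y)  # surrogate of the decision model

# --- Evaluation/Beating: falsify the malicious nodes' reports so the
# surrogate predicts the opposite of the honest fused decision ---
def leb_attack(honest_reports):
    target = 1 - fusion_center(honest_reports[None, :])[0]
    best = honest_reports.copy()
    # brute force over the 2^N_MALICIOUS falsified patterns (small here)
    for pattern in range(2 ** N_MALICIOUS):
        cand = honest_reports.copy()
        cand[:N_MALICIOUS] = [(pattern >> i) & 1 for i in range(N_MALICIOUS)]
        if surrogate.predict(cand[None, :])[0] == target:
            best = cand
            break
    return best

reports = rng.integers(0, 2, size=N_NODES)
attacked = leb_attack(reports)
print("fusion before:", fusion_center(reports[None, :])[0],
      "after:", fusion_center(attacked[None, :])[0])
```

The brute-force search works here only because the attacker controls few nodes; the paper's generic algorithm for crafting malicious sensing data would replace this inner loop.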
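The influence-limiting defense can likewise be sketched as a non-invasive weighting layer on top of an existing fusion rule. This is a hedged illustration under assumed details: the agreement-based weighting and the per-node cap are assumptions, not the paper's exact formulation; the point is that capping any single node's share of the vote bounds how much a coalition of falsified reports can disrupt the fused decision.

```python
# Hypothetical influence-limiting sketch; the weighting scheme and the cap
# are illustrative assumptions layered on top of an existing fusion rule.
import numpy as np

CAP = 0.15  # assumed ceiling on any single node's share of the vote

def limited_weights(history_reports, history_decisions):
    """Weight each node by past agreement with the fused decision,
    then cap and renormalize so no single node dominates the outcome."""
    agree = np.mean(history_reports == history_decisions[:, None], axis=0)
    w = agree / agree.sum()
    w = np.minimum(w, CAP)  # limit per-node influence
    return w / w.sum()

def fused_decision(reports, weights):
    """Weighted vote over binary reports with influence-limited weights."""
    return int(np.dot(weights, reports) > 0.5)
```

Because the cap applies regardless of how a node earned its weight, the layer can coexist with other defenses, matching the non-invasive property claimed in the abstract.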



