TY - GEN
T1 - Application of Adversarial Machine Learning in Protocol and Modulation Misclassification
AU - Krunz, Marwan
AU - Zhang, Wenhan
AU - Ditzler, Gregory
N1 - Funding Information: This research was supported by the U.S. Army Small Business Innovation Research Program Office and the Army Research Office under Contract No. W911NF-21-C-0016, by NSF (grants CNS-1563655, CNS-1731164, IIP-1822071, and CAREER Award #1943552), and by the Broadband Wireless Access & Applications Center (BWAC). Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the author(s) and do not necessarily reflect the views of NSF or ARO. Publisher Copyright: © 2022 SPIE.
PY - 2022
Y1 - 2022
N2 - This paper explores the application of adversarial machine learning (AML) in RF communications, and more specifically the impact of intelligently crafted AML perturbations on the accuracy of deep neural network (DNN)-based technology (protocol) and modulation-scheme classifiers. For protocol classification, we consider multiple heterogeneous wireless technologies that operate over shared spectrum, exemplified by the coexistence of Wi-Fi, LTE LAA (Licensed Assisted Access), and 5G NR-Unlicensed (5G NR-U) devices in the unlicensed 5 GHz bands. Time-interleaving-based spectrum sharing is assumed. Given a window of received I/Q samples, a legitimate DNN-based classifier (called the defender’s classifier) is often used to identify the underlying protocol/technology. Similarly, DNN classifiers are often used to discern the underlying modulation scheme. For both types of classifiers, we study an attack model in which an adversarial device eavesdrops on ongoing transmissions and uses its own attacker’s classifier to generate low-power AML perturbations that significantly degrade the accuracy of the defender’s classifier. We consider several DNN architectures for protocol and modulation classification (based on recurrent and convolutional neural networks) that normally exhibit high classification accuracy under random noise (i.e., AWGN). By applying AML-generated perturbations, we show how the accuracy of these classifiers degrades significantly, even when the signal-to-perturbation ratio (SPR) is high. Several attack vectors are formulated, depending on how much knowledge the attacker has of the defender’s classifier. At one extreme, we study a “white-box” attack, whereby the attacker has complete knowledge of the defender’s classifier and its training dataset. We gradually relax this assumption, ultimately considering an almost “black-box” attack. Mitigation techniques based on AML training are presented and are shown to help in countering AML attacks.
AB - This paper explores the application of adversarial machine learning (AML) in RF communications, and more specifically the impact of intelligently crafted AML perturbations on the accuracy of deep neural network (DNN)-based technology (protocol) and modulation-scheme classifiers. For protocol classification, we consider multiple heterogeneous wireless technologies that operate over shared spectrum, exemplified by the coexistence of Wi-Fi, LTE LAA (Licensed Assisted Access), and 5G NR-Unlicensed (5G NR-U) devices in the unlicensed 5 GHz bands. Time-interleaving-based spectrum sharing is assumed. Given a window of received I/Q samples, a legitimate DNN-based classifier (called the defender’s classifier) is often used to identify the underlying protocol/technology. Similarly, DNN classifiers are often used to discern the underlying modulation scheme. For both types of classifiers, we study an attack model in which an adversarial device eavesdrops on ongoing transmissions and uses its own attacker’s classifier to generate low-power AML perturbations that significantly degrade the accuracy of the defender’s classifier. We consider several DNN architectures for protocol and modulation classification (based on recurrent and convolutional neural networks) that normally exhibit high classification accuracy under random noise (i.e., AWGN). By applying AML-generated perturbations, we show how the accuracy of these classifiers degrades significantly, even when the signal-to-perturbation ratio (SPR) is high. Several attack vectors are formulated, depending on how much knowledge the attacker has of the defender’s classifier. At one extreme, we study a “white-box” attack, whereby the attacker has complete knowledge of the defender’s classifier and its training dataset. We gradually relax this assumption, ultimately considering an almost “black-box” attack. Mitigation techniques based on AML training are presented and are shown to help in countering AML attacks.
KW - Shared spectrum
KW - adversarial machine learning
KW - deep learning
KW - signal classification
KW - wireless security
UR - http://www.scopus.com/inward/record.url?scp=85146629082&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85146629082&partnerID=8YFLogxK
U2 - 10.1117/12.2619523
DO - 10.1117/12.2619523
M3 - Conference contribution
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications IV
A2 - Pham, Tien
A2 - Solomon, Latasha
PB - SPIE
T2 - Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications IV 2022
Y2 - 6 June 2022 through 12 June 2022
ER -