NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips

Title: NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Venceslai, Valerio; Marchisio, Alberto; Alouani, Ihsen; Martina, Maurizio; Shafique, Muhammad
Conference Name: 2020 International Joint Conference on Neural Networks (IJCNN)
Keywords: adversarial attacks, Artificial neural networks, Biological neural networks, Biological system modeling, Cross Layer Security, cross-layer, cyber physical systems, deep neural networks, DNN, fault-injection attacks, Hardware, machine learning, Metrics, policy-based governance, pubcrawl, reliability, resilience, Resiliency, security, SNN, Spiking Neural Networks

Due to their proven efficiency, machine-learning systems are deployed in a wide range of complex real-life problems. More specifically, Spiking Neural Networks (SNNs) have emerged as a promising solution to the accuracy, resource-utilization, and energy-efficiency challenges in machine-learning systems. While these systems are going mainstream, they have inherent security and reliability issues. In this paper, we propose NeuroAttack, a cross-layer attack that threatens the integrity of SNNs by exploiting low-level reliability issues through a high-level attack. In particular, we trigger a stealthy, fault-injection-based hardware backdoor through carefully crafted adversarial input noise. Our results on Deep Neural Networks (DNNs) and SNNs show a serious integrity threat to state-of-the-art machine-learning techniques.
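The low-level primitive behind the attack is a fault-injected bit-flip in a stored network parameter. A minimal sketch of why a single flip matters, assuming 8-bit two's-complement quantized weights (the quantization format and the helper names are illustrative assumptions, not details from the paper):

```python
def flip_bit(w, bit):
    """Flip one bit of an 8-bit two's-complement weight w in [-128, 127]."""
    u = w & 0xFF                 # reinterpret as an unsigned byte
    u ^= 1 << bit                # the injected fault: a single bit-flip
    return u - 256 if u >= 128 else u  # back to signed

def neuron(weights, inputs):
    """Toy integer neuron: weighted sum before any activation/threshold."""
    return sum(w * x for w, x in zip(weights, inputs))

clean = [100, -20, 50]
faulty = clean.copy()
faulty[0] = flip_bit(clean[0], 7)  # flip the sign bit of one weight: 100 -> -28

x = [1, 1, 1]
print(neuron(clean, x))   # 130
print(neuron(faulty, x))  # 2 -- one flipped bit collapses the activation
```

Flipping a high-order bit changes a weight's value by up to half the representable range, which is why a handful of targeted flips, armed by the right adversarial input, can subvert an otherwise well-trained network.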

Citation Key: venceslai_neuroattack_2020