Fooling A Deep-Learning Based Gait Behavioral Biometric System

Title: Fooling A Deep-Learning Based Gait Behavioral Biometric System
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Guo, H., Wang, Z., Wang, B., Li, X., Shila, D. M.
Conference Name: 2020 IEEE Security and Privacy Workshops (SPW)
Keywords: adversarial learning attacks, Adversarial Machine Learning, authentication, authorisation, Biological system modeling, biometrics (access control), black-box attack, Classification algorithms, composability, counter spoofing threats, deep learning (artificial intelligence), Deep-learning, deep-learning based gait behavioral biometric system, end-user devices, extent FGSM, fast gradient sign method, FGSM, FGSM iterations, gait behavioral biometrics, gradient methods, LSTM, machine learning algorithms, machine learning model, mature techniques, Metrics, privacy, pubcrawl, resilience, Resiliency, security, shadow model, Training, user behavioral information, white box, White Box Security, white-box attacks

We leverage deep learning algorithms on user behavioral information gathered from end-user devices to classify a subject of interest. Despite their ability to counter spoofing threats, these techniques are vulnerable to adversarial learning attacks, in which an attacker adds adversarial noise to input samples to fool the classifier into a false acceptance. Recently, a handful of mature techniques such as the Fast Gradient Sign Method (FGSM) have been proposed to aid white-box attacks, where the attacker has complete knowledge of the machine learning model. In contrast, we mount a black-box attack on a behavioral biometric system based on gait patterns by applying FGSM to a shadow model trained to mimic the target system. The attacker has limited knowledge of the target model and no knowledge of the real user being authenticated, yet induces a false acceptance at authentication. Our goal is to understand the feasibility of such a black-box attack and the extent to which FGSM on shadow models contributes to its success. Our results show that the performance of FGSM depends heavily on the quality of the shadow model, which in turn is affected by key factors such as the number of queries the target system allows for training the shadow model. Our experiments reveal strong relationships between shadow-model quality and FGSM performance, as well as the effect of the number of FGSM iterations used to create an attack instance. These insights also shed light on the shareability of deep-learning models, which can be exploited to launch a successful attack.
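The core perturbation rule the abstract refers to can be sketched in a few lines. The paper attacks an LSTM-based gait authenticator via a shadow model; the toy logistic-regression "authenticator", its weights, and the epsilon/iteration values below are illustrative assumptions, not details from the paper. Only the iterated FGSM update itself, x_adv = x − ε · sign(∇_x L(x, y_target)), which descends the loss toward the target (accept) label, reflects the technique named in the abstract.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_step(x, w, b, y_target, epsilon):
    """One targeted FGSM step on a logistic-regression shadow model.

    Moves x a small step against the sign of the input gradient of the
    binary cross-entropy loss, pushing the model's output toward y_target.
    """
    p = sigmoid(w @ x + b)          # shadow model's acceptance probability
    grad_x = (p - y_target) * w     # d(cross-entropy)/dx for logistic regression
    return x - epsilon * np.sign(grad_x)

# Illustrative gait feature vector and shadow-model weights (random, not real data)
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = -2.0
x = rng.normal(size=8)

before = sigmoid(w @ x + b)
for _ in range(10):                 # iterated FGSM, as studied in the paper
    x = fgsm_step(x, w, b, y_target=1.0, epsilon=0.05)
after = sigmoid(w @ x + b)
# The crafted sample is now more likely to be (falsely) accepted:
print(before, "->", after)
```

In the black-box setting described above, the gradient is taken on the shadow model and the crafted sample is then submitted to the target system, so the attack's success hinges on how closely the shadow model mimics the target.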

Citation Key: guo_fooling_2020