Quantifying the impact of adversarial evasion attacks on machine learning based android malware classifiers

Title: Quantifying the impact of adversarial evasion attacks on machine learning based android malware classifiers
Publication Type: Conference Paper
Year of Publication: 2017
Authors: Z. Abaid, M. A. Kaafar, S. Jha
Conference Name: 2017 IEEE 16th International Symposium on Network Computing and Applications (NCA)
Date Published: Oct
Keywords: adversarial evasion attacks, android malware classifiers, Android Malware Detection, Android-based devices, Androids, Extraction, feature extraction, Humanoid robots, invasive software, learning (artificial intelligence), linear classifiers, logistic regression, machine learning, malicious apps, Malware, neural nets, Neural Network, nonlinear classifiers, Random Forest, regression analysis, smart phones, statistical analysis, Support vector machines, Training, Training data, user devices
Abstract: With the proliferation of Android-based devices, malicious apps have increasingly found their way to user devices. Many solutions for Android malware detection rely on machine learning; although effective, these are vulnerable to attacks from adversaries who wish to subvert these algorithms and allow malicious apps to evade detection. In this work, we present a statistical analysis of the impact of adversarial evasion attacks on various linear and non-linear classifiers, using a recently proposed Android malware classifier as a case study. We systematically explore the complete space of possible attacks varying in the adversary's knowledge about the classifier; our results show that it is possible to subvert linear classifiers (Support Vector Machines and Logistic Regression) by perturbing only a few features of malicious apps, with more knowledgeable adversaries degrading the classifier's detection rate from 100% to 0% and a completely blind adversary able to lower it to 12%. We show non-linear classifiers (Random Forest and Neural Network) to be more resilient to these attacks. We conclude our study with recommendations for designing classifiers to be more robust to the attacks presented in our work.
Citation Key: 8171381
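The class of attack the abstract describes, a knowledgeable adversary evading a linear classifier by perturbing only a few high-weight features of malicious apps, can be sketched as follows. This is a minimal illustration, not the paper's classifier or dataset: the synthetic binary features, the logistic-regression model, and the `evade` helper are all assumptions, and for simplicity the attack zeroes high-weight features, ignoring the real-world constraint that an app must preserve its malicious functionality.

```python
# Hypothetical white-box evasion sketch: the adversary knows the linear
# model's weights and zeroes the few features that contribute most to the
# "malicious" score. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 400, 20

# Synthetic binary feature vectors: malware tends to set the first 5 features.
X_benign = (rng.random((n, d)) < 0.1).astype(float)
X_mal = (rng.random((n, d)) < 0.1).astype(float)
X_mal[:, :5] = (rng.random((n, 5)) < 0.9).astype(float)

X = np.vstack([X_benign, X_mal])
y = np.concatenate([np.zeros(n), np.ones(n)])
clf = LogisticRegression().fit(X, y)

def evade(x, weights, k):
    """Zero the k features with the largest positive weights."""
    x = x.copy()
    top = np.argsort(weights)[::-1][:k]
    x[top] = 0.0
    return x

w = clf.coef_.ravel()
before = clf.predict(X_mal).mean()  # detection rate before the attack
X_evaded = np.array([evade(x, w, k=5) for x in X_mal])
after = clf.predict(X_evaded).mean()  # detection rate after perturbing 5 features
print(f"detection rate before: {before:.2f}, after: {after:.2f}")
```

On this toy setup, perturbing just five features collapses the detection rate, mirroring the paper's finding that linear models are fragile under white-box evasion; a Random Forest or Neural Network trained on the same data would not expose a single weight vector for the adversary to exploit this directly.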