Biblio

Found 744 results

Filters: Keyword is machine learning
2020-10-26
Clincy, Victor, Shahriar, Hossain.  2019.  IoT Malware Analysis. 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC). 1:920–921.
IoT devices can be used to fulfil many of our daily tasks. They can be wearable devices, home appliances, or even light bulbs. With the introduction of this new technology, however, vulnerabilities are being introduced that can be leveraged or exploited by malicious users. One common vehicle of exploitation is malicious software, or malware. Malware can be extremely harmful and compromise the confidentiality, integrity and availability (CIA triad) of information systems. This paper analyzes the types of malware attacks, introduces some mitigation approaches and discusses future challenges.
Uchnár, Matúš, Feciľak, Peter.  2019.  Behavioral malware analysis algorithm comparison. 2019 IEEE 17th World Symposium on Applied Machine Intelligence and Informatics (SAMI). :397–400.
Malware analysis, and detection based on it, is a very important factor in computer security. Despite the enormous effort of companies making anti-malware solutions, it is usually not possible to respond to new malware in time, and some computers will become infected. This shortcoming can be partially mitigated by behavioral malware analysis. This work compares machine learning algorithms for behavioral malware analysis purposes.
Walker, Aaron, Sengupta, Shamik.  2019.  Insights into Malware Detection via Behavioral Frequency Analysis Using Machine Learning. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :1–6.
The most common defenses against malware threats involve the use of signatures derived from instances of known malware. However, the constant evolution of the malware threat landscape necessitates defense against unknown malware, making a signature catalog of known threats insufficient to prevent zero-day vulnerabilities from being exploited. Recent research has applied machine learning approaches to identify malware through artifacts of malicious activity as observed through dynamic behavioral analysis. We have seen that these approaches mimic common malware defenses by simply offering a method of detecting known malware. We contribute a new method of identifying software as malicious or benign through analysis of the frequency of Windows API system function calls. We show that this is a powerful technique for malware detection because it generates learning models which understand the difference between malicious and benign software, rather than producing a malware signature classifier. We contribute a method of systematically comparing machine learning models against different datasets to determine their efficacy in accurately distinguishing the difference between malicious and benign software.
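The core idea lends itself to a compact sketch. The following is a minimal illustration, not the authors' implementation: hypothetical Poisson-distributed call counts stand in for sandbox traces, and a random-forest classifier learns to separate the two classes from API-call frequencies alone.

```python
# Minimal sketch of behavioral frequency analysis (not the paper's code):
# each sample counts how often each Windows API function was called
# during a sandboxed run; labels mark malicious vs. benign.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

API_CALLS = ["CreateFileW", "RegSetValueExW", "VirtualAlloc", "WriteProcessMemory"]

rng = np.random.default_rng(0)
# Hypothetical stand-in data: benign software calls WriteProcessMemory rarely.
benign = rng.poisson(lam=[50, 5, 20, 0.1], size=(200, 4))
malicious = rng.poisson(lam=[30, 40, 60, 8.0], size=(200, 4))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = malicious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```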
2020-10-12
Rudd-Orthner, Richard N M, Mihaylova, Lyudmilla.  2019.  An Algebraic Expert System with Neural Network Concepts for Cyber, Big Data and Data Migration. 2019 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). :1–6.

This paper describes a machine-assistance approach to grading decisions for values that might be missing or need validation, using a mathematical, algebraic form of an Expert System instead of the traditional textual or logic forms, and builds a neural network computational graph structure. This Expert System approach is also structured into a neural-network-like format of input, hidden and output layers that provides a structured approach to knowledge-base organization; this is a useful abstraction for reuse in data migration applications in big data, Cyber and relational databases. The approach is further enhanced with a Bayesian probability tree that grades the confidences of value probabilities, instead of the traditional grading of rule probabilities, and estimates the most probable value in light of all evidence presented. This is groundwork for a Machine Learning (ML) expert system approach in a form that is closer to a Neural Network node structure.

2020-10-05
Rafati, Jacob, DeGuchy, Omar, Marcia, Roummel F..  2018.  Trust-Region Minimization Algorithm for Training Responses (TRMinATR): The Rise of Machine Learning Techniques. 2018 26th European Signal Processing Conference (EUSIPCO). :2015–2019.

Deep learning is a highly effective machine learning technique for large-scale problems. The optimization of nonconvex functions in the deep learning literature is typically restricted to the class of first-order algorithms. These methods rely on gradient information because of the computational complexity associated with second-derivative Hessian matrix inversion and the memory storage required in large-scale data problems. The reward for using second-derivative information is that the methods can result in improved convergence properties for problems typically found in a non-convex setting, such as saddle points and local minima. In this paper we introduce TRMinATR - an algorithm based on the limited-memory BFGS quasi-Newton method using a trust region - as an alternative to gradient descent methods. TRMinATR bridges the disparity between first-order and second-order methods by continuing to use gradient information to calculate Hessian approximations. We provide empirical results on the classification task of the MNIST dataset and show robust convergence with preferred generalization characteristics.
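As a rough illustration of the quasi-Newton-plus-trust-region idea (not TRMinATR itself), SciPy's trust-constr solver accepts a BFGS Hessian-approximation strategy, combining gradient information with a trust-region step in the same general spirit:

```python
# Illustrative sketch only: quasi-Newton Hessian approximation (BFGS)
# inside a trust-region solver, shown on the Rosenbrock test function.
import numpy as np
from scipy.optimize import minimize, BFGS, rosen, rosen_der

x0 = np.full(10, -1.0)
res = minimize(rosen, x0, method="trust-constr",
               jac=rosen_der, hess=BFGS())
print("minimum found:", res.fun, "after", res.niter, "iterations")
```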

Zhou, Xingyu, Li, Yi, Barreto, Carlos A., Li, Jiani, Volgyesi, Peter, Neema, Himanshu, Koutsoukos, Xenofon.  2019.  Evaluating Resilience of Grid Load Predictions under Stealthy Adversarial Attacks. 2019 Resilience Week (RWS). 1:206–212.
Recent advances in machine learning enable wider applications of prediction models in cyber-physical systems. Smart grids are increasingly using distributed sensor settings for distributed sensor fusion and information processing. Load forecasting systems use these sensors to predict future loads, which feed into dynamic pricing of power and grid maintenance. However, these inference predictors are highly complex and thus vulnerable to adversarial attacks; synthetic norm-bounded modifications to a limited number of sensors can greatly affect the accuracy of the overall predictor. It can be much cheaper and more effective to incorporate elements of security and resilience at the earliest stages of design. In this paper, we demonstrate how to analyze the security and resilience of learning-based prediction models in power distribution networks by utilizing a domain-specific deep-learning and testing framework. This framework is developed using DeepForge and enables rapid design and analysis of attack scenarios against distributed smart meters in a power distribution network. It runs the attack simulations in the cloud backend. In addition to the predictor model, we have integrated an anomaly detector to detect adversarial attacks targeting the predictor. We formulate the stealthy adversarial attacks as an optimization problem to maximize prediction loss while minimizing the required perturbations. Under the worst-case setting, where the attacker has full knowledge of both the predictor and the detector, an iterative attack method has been developed to solve for the adversarial perturbation. We demonstrate the framework capabilities using a GridLAB-D based power distribution network model and show how stealthy adversarial attacks can affect smart grid prediction systems even with only partial control of the network.
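The attack formulation can be sketched in a few lines. Assuming a stand-in linear predictor and an l-infinity perturbation budget (both hypothetical, not the paper's GridLAB-D setup), an iterative projected-gradient loop maximizes prediction loss over the few sensors the attacker controls:

```python
# Hedged sketch of the optimization view of the attack: perturb a
# limited set of sensors to maximize prediction error under a bound.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=20)                      # stand-in predictor weights
x = rng.normal(size=20)                      # clean sensor readings
y_true = w @ x

eps, step = 0.1, 0.02
mask = np.zeros(20)
mask[:3] = 1.0                               # attacker controls 3 sensors
delta = rng.uniform(-eps, eps, size=20) * mask
for _ in range(50):
    # gradient of squared prediction error w.r.t. the perturbation
    grad = 2 * (w @ (x + delta) - y_true) * w
    delta = np.clip(delta + step * np.sign(grad) * mask, -eps, eps)

print("prediction shift:", w @ (x + delta) - y_true)
```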
2020-09-28
Liu, Kai, Zhou, Yun, Wang, Qingyong, Zhu, Xianqiang.  2019.  Vulnerability Severity Prediction With Deep Neural Network. 2019 5th International Conference on Big Data and Information Analytics (BigDIA). :114–119.
The high frequency of network security incidents has brought many negative effects, and even huge economic losses, to countries, enterprises and individuals in recent years. Therefore, more and more attention has been paid to the problem of network security. In order to evaluate newly included vulnerability text information accurately, and to reduce the workload of experts and the false-negative rate of the traditional method, multiple deep learning methods for vulnerability text classification evaluation are proposed in this paper. The standard Cross Site Scripting (XSS) vulnerability text data is processed first, and then classified using three kinds of deep neural networks (CNN, LSTM, TextRCNN) and one kind of traditional machine learning method (XGBoost). The dropout ratio of the optimal CNN network, the epoch of all deep neural networks and the training set data were tuned via experiments to improve the fit on our target task. The results show that the deep learning methods evaluate vulnerability risk levels better than traditional machine learning methods, but cost more time. We train our models on various training sets and test with the same testing set. The performance and utility of recurrent convolutional neural networks (TextRCNN) is highest of all the methods compared, with a classification accuracy of 93.95%.
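For a sense of the task setup, here is a deliberately lightweight classical baseline on hypothetical vulnerability-description snippets; the paper's CNN, LSTM, and TextRCNN models are much heavier, so this only illustrates the text-classification pipeline, not their results:

```python
# Minimal classical baseline for vulnerability-text severity
# classification (toy data; not the paper's models or corpus).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "stored xss in comment field allows arbitrary script execution",
    "reflected xss via search parameter, limited to same session",
    "minor self-xss requiring victim to paste payload manually",
    "dom based xss lets remote attacker steal session cookies",
]
severity = ["high", "medium", "low", "high"]   # hypothetical labels

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, severity)
print(model.predict(["reflected xss steals cookies of admin users"]))
```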
Li, Lin, Wei, Linfeng.  2019.  Automatic XSS Detection and Automatic Anti-Anti-Virus Payload Generation. 2019 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC). :71–76.
In the Web 2.0 era, user interaction makes Web applications more diverse, but brings threats, among which XSS vulnerability is a common and pernicious one. In order to improve the efficiency of XSS detection, this paper investigates the parameter characteristics of malicious XSS attacks. We identify whether a parameter is malicious or not by detecting user input parameters with an SVM algorithm. The original malicious XSS parameters are then deformed by a DQN reinforcement learning algorithm so that they evade a rule-based WAF (anti-anti-virus). Based on this method, we can identify whether a specific WAF is secure. The above model yields a more efficient automatic XSS detection tool and a more targeted automatic anti-anti-virus payload generation tool. This paper also explores the automatic generation of XSS attack codes with an RNN LSTM algorithm.
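The SVM step can be sketched as follows, under stated assumptions: toy parameter strings and character n-gram features stand in for the paper's actual feature engineering:

```python
# Sketch of SVM-based detection of malicious request parameters
# (illustrative features and data, not the paper's setup).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

params = [
    "id=42", "page=home", "q=shoes+red",
    "q=<script>alert(1)</script>",
    "name=\"><img src=x onerror=alert(1)>",
    "cb=javascript:alert(document.cookie)",
]
labels = [0, 0, 0, 1, 1, 1]  # 0 = benign, 1 = malicious

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
clf.fit(params, labels)
print(clf.predict(["user=<svg onload=alert(1)>", "sort=price_asc"]))
```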
Akaishi, Sota, Uda, Ryuya.  2019.  Classification of XSS Attacks by Machine Learning with Frequency of Appearance and Co-occurrence. 2019 53rd Annual Conference on Information Sciences and Systems (CISS). :1–6.
Cross site scripting (XSS) attack is one of the attacks on the web. It enables session hijacking via HTTP cookies, information collection via fake HTML input forms, and phishing via dummy sites. As a countermeasure against XSS attacks, machine learning has attracted a lot of attention. Existing research has used SVM, Random Forest and SCW for detection of the attack. However, these studies suffer from data sets that are too small or unbalanced, and from preprocessing methods for vectorization of strings that cause misclassification. The highest classification accuracy in existing research was 98%. Therefore, in this paper, we improved the preprocessing method for vectorization by using word2vec to capture the frequency of appearance and co-occurrence of the words in XSS attack scripts. Moreover, we also used a large data set to decrease the deviation of the data. Furthermore, we evaluated the classification results with two procedures. One is an inappropriate procedure which some researchers tend to select by mistake. The other is an appropriate procedure which can be applied to an attack detection filter in the real environment.
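A minimal sketch of word2vec-based vectorization might look like the following; the tokenization is naive whitespace splitting and the data is invented, unlike the paper's script-aware preprocessing and large corpus:

```python
# Hedged sketch: embed script tokens with word2vec, represent each
# script by the mean of its word vectors, then classify.
import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import SVC

scripts = [
    "script alert document cookie script".split(),
    "img src x onerror alert".split(),
    "div class content div".split(),
    "p hello world p".split(),
]
labels = [1, 1, 0, 0]  # 1 = XSS payload, 0 = benign markup

w2v = Word2Vec(scripts, vector_size=50, min_count=1, seed=0)
X = np.array([np.mean([w2v.wv[t] for t in s], axis=0) for s in scripts])
clf = SVC().fit(X, labels)
print(clf.predict(X[:1]))
```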
Abie, Habtamu.  2019.  Cognitive Cybersecurity for CPS-IoT Enabled Healthcare Ecosystems. 2019 13th International Symposium on Medical Information and Communication Technology (ISMICT). :1–6.

Cyber Physical Systems (CPS)-Internet of Things (IoT) enabled healthcare services and infrastructures improve human life, but are vulnerable to a variety of emerging cyber-attacks. Cybersecurity specialists are finding it hard to keep pace with the increasingly sophisticated attack methods. There is a critical need for innovative cognitive cybersecurity for CPS-IoT enabled healthcare ecosystems. This paper presents a cognitive cybersecurity framework for simulating human cognitive behaviour to anticipate and respond to new and emerging cybersecurity and privacy threats to CPS-IoT and critical infrastructure systems. It includes the conceptualisation and description of a layered architecture which combines Artificial Intelligence, cognitive methods and innovative security mechanisms.

Chen, Yuqi, Poskitt, Christopher M., Sun, Jun.  2018.  Learning from Mutants: Using Code Mutation to Learn and Monitor Invariants of a Cyber-Physical System. 2018 IEEE Symposium on Security and Privacy (SP). :648–660.
Cyber-physical systems (CPS) consist of sensors, actuators, and controllers all communicating over a network; if any subset becomes compromised, an attacker could cause significant damage. With access to data logs and a model of the CPS, the physical effects of an attack could potentially be detected before any damage is done. Manually building a model that is accurate enough in practice, however, is extremely difficult. In this paper, we propose a novel approach for constructing models of CPS automatically, by applying supervised machine learning to data traces obtained after systematically seeding their software components with faults ("mutants"). We demonstrate the efficacy of this approach on the simulator of a real-world water purification plant, presenting a framework that automatically generates mutants, collects data traces, and learns an SVM-based model. Using cross-validation and statistical model checking, we show that the learnt model characterises an invariant physical property of the system. Furthermore, we demonstrate the usefulness of the invariant by subjecting the system to 55 network and code-modification attacks, and showing that it can detect 85% of them from the data logs generated at runtime.
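Conceptually, the learning step reduces to supervised classification of trace features, as in this sketch with synthetic stand-in traces; the actual framework mutates real software components and collects plant data:

```python
# Conceptual sketch (not the authors' framework): label traces from the
# unmodified system as normal and traces from mutated code as anomalous,
# then learn an SVM separating them.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
normal_traces = rng.normal(loc=0.0, scale=1.0, size=(300, 8))
mutant_traces = rng.normal(loc=1.5, scale=1.2, size=(300, 8))
X = np.vstack([normal_traces, mutant_traces])
y = np.array([0] * 300 + [1] * 300)

scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print("5-fold accuracy:", scores.mean())
```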
Sliwa, Benjamin, Haferkamp, Marcus, Al-Askary, Manar, Dorn, Dennis, Wietfeld, Christian.  2018.  A radio-fingerprinting-based vehicle classification system for intelligent traffic control in smart cities. 2018 Annual IEEE International Systems Conference (SysCon). :1–5.
The measurement and provision of precise and up-to-date traffic-related key performance indicators is a key element and crucial factor for intelligent traffic control systems in upcoming smart cities. The street network is considered as a highly-dynamic Cyber Physical System (CPS) where measured information forms the foundation for dynamic control methods aiming to optimize the overall system state. Apart from global system parameters like traffic flow and density, specific data, such as velocity of individual vehicles as well as vehicle type information, can be leveraged for highly sophisticated traffic control methods like dynamic type-specific lane assignments. Consequently, solutions for acquiring these kinds of information are required and have to comply with strict requirements ranging from accuracy over cost-efficiency to privacy preservation. In this paper, we present a system for classifying vehicles based on their radio-fingerprint. In contrast to other approaches, the proposed system is able to provide real-time capable and precise vehicle classification as well as cost-efficient installation and maintenance, privacy preservation and weather independence. The system performance in terms of accuracy and resource-efficiency is evaluated in the field using comprehensive measurements. Using a machine learning based approach, the resulting success ratio for classifying cars and trucks is above 99%.
Shen, Jingyi, Baysal, Olga, Shafiq, M. Omair.  2019.  Evaluating the Performance of Machine Learning Sentiment Analysis Algorithms in Software Engineering. 2019 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :1023–1030.
In recent years, sentiment analysis has gained attention within the software engineering domain. Automated sentiment analysis has long suffered from doubts about its accuracy, and tool performance is unstable when applied to datasets other than the original evaluation dataset. Researchers also disagree on whether machine learning algorithms perform better than conventional lexicon- and rule-based approaches. In this paper, we looked into the factors in datasets that may affect evaluation performance, evaluated the popular machine learning algorithms for sentiment analysis, and then proposed a novel structure for an automated sentiment tool that combines the advantages of both approaches.
Piskachev, Goran, Nguyen Quang Do, Lisa, Johnson, Oshando, Bodden, Eric.  2019.  SWAN_ASSIST: Semi-Automated Detection of Code-Specific, Security-Relevant Methods. 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE). :1094–1097.
To detect specific types of bugs and vulnerabilities, static analysis tools must be correctly configured with security-relevant methods (SRM), e.g., sources, sinks, sanitizers and authentication methods, usually a very labour-intensive and error-prone process. This work presents the semi-automated tool SWAN_ASSIST, which aids the configuration with an IntelliJ plugin based on active machine learning. It integrates our novel automated machine-learning approach SWAN, which identifies and classifies Java SRM. SWAN_ASSIST further integrates user feedback through iterative learning. SWAN_ASSIST aids developers by asking them to classify, at each point in time, exactly those methods whose classification best impacts the classification result. Our experiments show that SWAN_ASSIST classifies SRM with high precision, and requires relatively low effort from the user. A video demo of SWAN_ASSIST can be found at https://youtu.be/fSyD3V6EQOY. The source code is available at https://github.com/secure-software-engineering/swan.
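The active-learning loop behind this idea can be sketched with uncertainty sampling; everything here (features, oracle labels) is a hypothetical stand-in for SWAN_ASSIST's plugin machinery:

```python
# Minimal active-learning sketch: repeatedly ask the "user" (an oracle
# array here) to label the method the model is least certain about.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))                  # method feature vectors
oracle = (X[:, 0] + X[:, 1] > 0).astype(int)   # "true" SRM labels

# seed with one labeled example per class, then query interactively
labeled = [int(np.flatnonzero(oracle == 0)[0]),
           int(np.flatnonzero(oracle == 1)[0])]
for _ in range(10):
    clf = LogisticRegression().fit(X[labeled], oracle[labeled])
    uncertainty = np.abs(clf.predict_proba(X)[:, 1] - 0.5)
    uncertainty[labeled] = np.inf              # skip already-labeled methods
    labeled.append(int(np.argmin(uncertainty)))  # ask about the least certain

print("accuracy after 10 queries:", clf.score(X, oracle))
```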
2020-09-21
Chow, Ka-Ho, Wei, Wenqi, Wu, Yanzhao, Liu, Ling.  2019.  Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks. 2019 IEEE International Conference on Big Data (Big Data). :1282–1291.
Deep neural networks (DNNs) have demonstrated impressive performance on many challenging machine learning tasks. However, DNNs are vulnerable to adversarial inputs generated by adding maliciously crafted perturbations to the benign inputs. As a growing number of attacks have been reported to generate adversarial inputs of varying sophistication, the defense-attack arms race has been accelerated. In this paper, we present MODEF, a cross-layer model diversity ensemble framework. MODEF intelligently combines unsupervised model denoising ensemble with supervised model verification ensemble by quantifying model diversity, aiming to boost the robustness of the target model against adversarial examples. Evaluated using eleven representative attacks on popular benchmark datasets, we show that MODEF achieves remarkable defense success rates, compared with existing defense methods, and provides a superior capability of repairing adversarial inputs and making correct predictions with high accuracy in the presence of black-box attacks.
2020-09-18
Zolanvari, Maede, Teixeira, Marcio A., Gupta, Lav, Khan, Khaled M., Jain, Raj.  2019.  Machine Learning-Based Network Vulnerability Analysis of Industrial Internet of Things. IEEE Internet of Things Journal. 6:6822–6834.
It is critical to secure Industrial Internet of Things (IIoT) devices because of the potentially devastating consequences of an attack. Machine learning (ML) and big data analytics are two powerful levers for analyzing and securing Internet of Things (IoT) technology. By extension, these techniques can help improve the security of IIoT systems as well. In this paper, we first present common IIoT protocols and their associated vulnerabilities. Then, we run a cyber-vulnerability assessment and discuss the utilization of ML in countering these susceptibilities. Following that, a literature review of the available intrusion detection solutions using ML models is presented. Finally, we discuss our case study, which includes details of a real-world testbed that we have built to conduct cyber-attacks and to design an intrusion detection system (IDS). We deploy backdoor, command injection, and Structured Query Language (SQL) injection attacks against the system and demonstrate how an ML-based anomaly detection system can perform well in detecting these attacks. We have evaluated the performance through representative metrics to give a fair point of view on the effectiveness of the methods.
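One way such an ML-based anomaly detector can work, sketched on synthetic flow features rather than the paper's testbed data: train only on benign traffic, then flag statistical outliers.

```python
# Hedged sketch of anomaly-based intrusion detection on flow features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(4)
# hypothetical columns: bytes/s, packets/s, fraction of error responses
benign = rng.normal(loc=[500, 60, 0.2], scale=[50, 5, 0.05], size=(1000, 3))
attack = rng.normal(loc=[50, 5, 0.9], scale=[10, 2, 0.05], size=(20, 3))

ids = IsolationForest(contamination=0.02, random_state=0).fit(benign)
print("attack flows flagged:", (ids.predict(attack) == -1).sum(), "of 20")
```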
2020-09-14
Ortiz Garcés, Ivan, Cazares, Maria Fernada, Andrade, Roberto Omar.  2019.  Detection of Phishing Attacks with Machine Learning Techniques in Cognitive Security Architecture. 2019 International Conference on Computational Science and Computational Intelligence (CSCI). :366–370.
The number of phishing attacks has increased in Latin America, exceeding the operational capacity of cybersecurity analysts. Cognitive security proposes the use of big data, machine learning, and data analytics to improve response times in attack detection. This paper presents an investigation into the analysis of anomalous behavior related to phishing web attacks and how machine learning techniques can be an option to face the problem. The analysis uses a contaminated data set and Python tools to develop machine learning models that detect phishing attacks through the analysis of URLs, determining whether URLs are good or bad based on specific URL characteristics, with the goal of providing real-time information for taking proactive decisions that minimize the impact of an attack.
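A minimal sketch of URL-characteristic-based detection; the features and labels below are illustrative, not the paper's feature set:

```python
# Hedged sketch: extract simple lexical features from each URL and
# train a classifier to label URLs as good (0) or bad (1).
from urllib.parse import urlparse
from sklearn.ensemble import RandomForestClassifier

def url_features(url):
    p = urlparse(url)
    return [
        len(url),                       # long URLs are suspicious
        url.count("."),                 # many subdomains
        int("@" in url),                # credential-style tricks
        int(p.scheme == "https"),       # TLS present
        sum(c.isdigit() for c in url),  # digit-heavy hosts
    ]

urls = [
    "https://www.bank.com/login",
    "http://secure-bank.com.verify-account.xyz/login@session=1",
    "https://github.com/user/repo",
    "http://192.168.4.12/paypal.com/webscr.php",
]
labels = [0, 1, 0, 1]  # hypothetical ground truth

clf = RandomForestClassifier(random_state=0)
clf.fit([url_features(u) for u in urls], labels)
print(clf.predict([url_features("http://login.bank.com.attacker.tk/verify")]))
```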
2020-09-11
Garip, Mevlut Turker, Lin, Jonathan, Reiher, Peter, Gerla, Mario.  2019.  SHIELDNET: An Adaptive Detection Mechanism against Vehicular Botnets in VANETs. 2019 IEEE Vehicular Networking Conference (VNC). :1–7.
Vehicular ad hoc networks (VANETs) are designed to provide traffic safety by enabling vehicles to broadcast information, such as speed, location and heading, through inter-vehicular communications to proactively avoid collisions. However, the attacks targeting these networks might overshadow their advantages if not protected against. One powerful threat against VANETs is vehicular botnets. In our earlier work, we demonstrated several vehicular botnet attacks that can have damaging impacts on the security and privacy of VANETs. In this paper, we present SHIELDNET, the first detection mechanism against vehicular botnets. Similar to the detection approaches against Internet botnets, we target the vehicular botnet communication and use several machine learning techniques to identify vehicular bots. We show via simulation that SHIELDNET can identify 77 percent of the vehicular bots. We propose several improvements to the VANET standards and show that their existing vulnerabilities make an effective defense against vehicular botnets infeasible.
Azakami, Tomoka, Shibata, Chihiro, Uda, Ryuya, Kinoshita, Toshiyuki.  2019.  Creation of Adversarial Examples with Keeping High Visual Performance. 2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT). :52–56.
The accuracy of image classification by convolutional neural networks now exceeds the ability of human beings and contributes to various fields. However, the improvement of image recognition technology deals a great blow to image-based security systems such as CAPTCHA. In particular, since character-string CAPTCHAs already add distortion and noise so that they cannot be read by computers, lowered human readability becomes a problem. Adversarial examples are a technique for producing images that intentionally cause machine learning classifiers to be wrong. The best feature of this technique is that when human beings compare the original image with the adversarial example, they cannot perceive any difference in appearance. However, adversarial examples created with the conventional FGSM cannot reliably cause misclassification in strongly nonlinear networks like CNNs. Osadchy et al. researched applying adversarial examples to CAPTCHA and attempted to make a CNN misclassify them. However, they could not make the CNN misclassify character images. In this research, we propose a method that applies FGSM to character-string CAPTCHAs and makes a CNN misclassify them.
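For reference, FGSM, the method the paper builds on, is a single signed-gradient step; the toy model and input here are stand-ins, not the authors' CAPTCHA pipeline:

```python
# Textbook FGSM (Goodfellow et al.) on a stand-in classifier and image.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in character image
y = torch.tensor([3])                             # its true class

loss = loss_fn(model(x), y)
loss.backward()
eps = 0.1
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()  # one-step FGSM
print("prediction before/after:",
      model(x).argmax().item(), model(x_adv).argmax().item())
```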
Shekhar, Heemany, Moh, Melody, Moh, Teng-Sheng.  2019.  Exploring Adversaries to Defend Audio CAPTCHA. 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). :1155–1161.
CAPTCHA is a web-based authentication method used by websites to distinguish between humans (valid users) and bots (attackers). Audio captcha is an accessible captcha meant for visually disabled users such as color-blind, blind and near-sighted users. Firstly, this paper analyzes how secure current audio captchas are against attacks using machine learning (ML) and deep learning (DL) models. Each audio captcha is made up of five, seven or ten random digits [0-9] spoken one after the other, along with varying background noise throughout the length of the audio. If the ML or DL model is able to correctly identify all spoken digits, in the correct order of occurrence, in a single audio captcha, we consider that captcha to be broken and the attack to be successful. Throughout the paper, accuracy refers to the attack model's success at breaking audio captchas; the higher the attack accuracy, the less secure the audio captchas are. In our baseline experiments, we found that attack models could break audio captchas that had no background noise or medium background noise, with any number of spoken digits, with nearly 99% to 100% accuracy, whereas audio captchas with high background noise were relatively more secure, with an attack accuracy of 85%. Secondly, we propose that the concepts of adversarial example algorithms can be used to create a new kind of audio captcha that is more resilient to attacks. We found that even after retraining the models on the new adversarial audio data, the attack accuracy remained as low as 25% to 36%. Lastly, we explore the benefits of creating adversarial audio captchas through different algorithms such as the Basic Iterative Method (BIM) and DeepFool. We found that as long as the attacker has less than a 45% sample from each kind of adversarial audio dataset, the defense will be successful at preventing attacks.
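The Basic Iterative Method the paper names is essentially FGSM applied in small, repeatedly projected steps; this sketch uses a toy feature vector rather than real audio:

```python
# Sketch of BIM: iterated signed-gradient steps with projection onto
# an epsilon ball around the clean input (toy model and features).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()
x = torch.rand(1, 64)          # stand-in audio feature frame
y = torch.tensor([7])
eps, alpha, steps = 0.05, 0.01, 10

x_adv = x.clone()
for _ in range(steps):
    x_adv.requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = x_adv.detach() + alpha * grad.sign()
    x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back to eps-ball

print("adversarial prediction:", model(x_adv).argmax().item())
```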
2020-09-04
Elkanishy, Abdelrahman, Badawy, Abdel-Hameed A., Furth, Paul M., Boucheron, Laura E., Michael, Christopher P..  2019.  Machine Learning Bluetooth Profile Operation Verification via Monitoring the Transmission Pattern. 2019 53rd Asilomar Conference on Signals, Systems, and Computers. :2144–2148.
Manufacturers often buy and/or license communication ICs from third-party suppliers. These communication ICs are then integrated into a complex computational system, resulting in a wide range of potential hardware-software security issues. This work proposes a compact supervisory circuit to classify the Bluetooth profile operation of a Bluetooth System-on-Chip (SoC) at low frequencies by monitoring the radio frequency (RF) output power of the Bluetooth SoC. The idea is to inexpensively manufacture an RF envelope detector to monitor the RF output power and a profile classification algorithm on a custom low-frequency integrated circuit in a low-cost legacy technology. When the supervisory circuit observes unexpected behavior, it can shut off power to the Bluetooth SoC. In this preliminary work, we prototype the supervisory circuit using off-the-shelf components to collect a data set sufficient to train 11 different machine learning models. We extract smart descriptive time-domain features from the envelope of the RF output signal. Then, we train the machine learning models to classify three different Bluetooth operation profiles: sensor, hands-free, and headset. Our results demonstrate 100% classification accuracy with low computational complexity.
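A sketch of the feature-extraction and classification step under stated assumptions (synthetic square-wave-like envelopes stand in for measured RF captures):

```python
# Hedged sketch: summarize each RF envelope window with simple
# time-domain statistics, then classify the Bluetooth profile.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def envelope_features(env):
    return [env.mean(), env.std(), env.max(),
            (env > env.mean()).mean(),      # duty-cycle estimate
            np.abs(np.diff(env)).mean()]    # burstiness

rng = np.random.default_rng(5)
def synth(duty):  # bursty envelope with a given transmit duty cycle
    return (rng.random(1000) < duty).astype(float) + 0.05 * rng.normal(size=1000)

X = [envelope_features(synth(d)) for d in [0.1] * 30 + [0.4] * 30 + [0.8] * 30]
y = ["sensor"] * 30 + ["hands-free"] * 30 + ["headset"] * 30

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([envelope_features(synth(0.4))]))
```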
Khan, Aasher, Rehman, Suriya, Khan, Muhammad U.S, Ali, Mazhar.  2019.  Synonym-based Attack to Confuse Machine Learning Classifiers Using Black-box Setting. 2019 4th International Conference on Emerging Trends in Engineering, Sciences and Technology (ICEEST). :1–7.
Twitter, being the most popular content sharing platform, is giving rise to automated accounts called "bots", and a majority of users on Twitter are bots. Various machine learning (ML) algorithms are designed to detect bots while avoiding the vulnerability constraints of ML-based models. This paper exploits vulnerabilities of machine learning (ML) algorithms through a black-box attack: an adversarial text sequence causes deep learning (DL) classifiers for bot detection to misclassify. The literature shows that ML models are vulnerable to attacks. The aim of this paper is to compromise the accuracy of ML-based bot detection algorithms by replacing original words in tweets with their synonyms. Our results show a 7.2% decrease in accuracy for bot tweets, i.e., bot tweets are classified as legitimate tweets.
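A greedy synonym-substitution loop captures the black-box idea; both the classifier and the synonym table below are hypothetical stand-ins:

```python
# Sketch of a synonym-substitution evasion attack: swap words for
# synonyms until the black-box classifier's verdict flips.
SYNONYMS = {"buy": "purchase", "cheap": "inexpensive", "click": "tap",
            "free": "complimentary", "win": "obtain"}

def bot_classifier(tweet):          # hypothetical black-box model
    spammy = {"buy", "cheap", "click", "free", "win"}
    return sum(w in spammy for w in tweet.split()) >= 2  # True = bot

tweet = "click here to win a free phone"
words = tweet.split()
for i, w in enumerate(words):
    if bot_classifier(" ".join(words)) and w in SYNONYMS:
        words[i] = SYNONYMS[w]      # greedy word-by-word substitution

print("evaded:", not bot_classifier(" ".join(words)), "->", " ".join(words))
```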
Zhao, Pu, Liu, Sijia, Chen, Pin-Yu, Hoang, Nghia, Xu, Kaidi, Kailkhura, Bhavya, Lin, Xue.  2019.  On the Design of Black-Box Adversarial Examples by Leveraging Gradient-Free Optimization and Operator Splitting Method. 2019 IEEE/CVF International Conference on Computer Vision (ICCV). :121–130.
Robust machine learning is currently one of the most prominent topics which could potentially help shape a future of advanced AI platforms that not only perform well in average cases but also in worst cases or adverse situations. Despite the long-term vision, however, existing studies on black-box adversarial attacks are still restricted to very specific settings of threat models (e.g., single distortion metric and restrictive assumptions on the target model's feedback to queries) and/or suffer from prohibitively high query complexity. To push for further advances in this field, we introduce a general framework based on an operator splitting method, the alternating direction method of multipliers (ADMM), to devise efficient, robust black-box attacks that work with various distortion metrics and feedback settings without incurring high query complexity. Due to the black-box nature of the threat model, the proposed ADMM solution framework is integrated with zeroth-order (ZO) optimization and Bayesian optimization (BO), and thus is applicable to the gradient-free regime. This results in two new black-box adversarial attack generation methods, ZO-ADMM and BO-ADMM. Our empirical evaluations on image classification datasets show that our proposed approaches have much lower function query complexities compared to state-of-the-art attack methods, but achieve very competitive attack success rates.
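The zeroth-order (ZO) ingredient can be illustrated in isolation: estimate gradients purely from loss queries via finite differences along random directions. The toy quadratic below stands in for the black-box model; the paper couples this estimator with ADMM.

```python
# Sketch of zeroth-order gradient estimation and descent using only
# function queries (no gradient access), on a toy quadratic loss.
import numpy as np

rng = np.random.default_rng(6)
target = rng.normal(size=10)

def loss(x):                        # query-only black box
    return float(np.sum((x - target) ** 2))

def zo_gradient(f, x, mu=1e-3, samples=50):
    g = np.zeros_like(x)
    for _ in range(samples):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - f(x)) / mu * u   # directional estimate
    return g / samples

x = np.zeros(10)
for _ in range(200):
    x -= 0.05 * zo_gradient(loss, x)
print("final loss:", loss(x))
```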
Tsingenopoulos, Ilias, Preuveneers, Davy, Joosen, Wouter.  2019.  AutoAttacker: A reinforcement learning approach for black-box adversarial attacks. 2019 IEEE European Symposium on Security and Privacy Workshops (EuroS PW). :229–237.
Recent research has shown that machine learning models are susceptible to adversarial examples, allowing attackers to trick a machine learning model into making a mistake and producing an incorrect output. Adversarial examples are commonly constructed or discovered by using gradient-based methods that require white-box access to the model. In most real-world AI system deployments, having complete access to the machine learning model is an unrealistic threat model. However, it is possible for an attacker to construct adversarial examples even in the black-box case - where we assume solely a query capability to the model - with a variety of approaches, each with its advantages and shortcomings. We introduce AutoAttacker, a novel reinforcement learning framework where agents learn how to operate around the black-box model by querying it, to effectively extract the underlying decision behaviour and to undermine it successfully. AutoAttacker is a first-of-its-kind framework that uses reinforcement learning and assumes nothing about the differentiability or structure of the underlying function, and is thus robust towards common defenses like gradient obfuscation or adversarial training. Finally, without differentiable output, as in binary classification, most methods cease to operate and require either an approximation of the gradient or another approach altogether. Our approach, however, maintains the capability to function when the output descriptiveness diminishes.
Usama, Muhammad, Qayyum, Adnan, Qadir, Junaid, Al-Fuqaha, Ala.  2019.  Black-box Adversarial Machine Learning Attack on Network Traffic Classification. 2019 15th International Wireless Communications Mobile Computing Conference (IWCMC). :84–89.

Deep machine learning techniques have shown promising results in network traffic classification; however, the robustness of these techniques under adversarial threats is still in question. Deep machine learning models are found to be vulnerable to small, carefully crafted adversarial perturbations, posing a major question about the performance of deep machine learning techniques. In this paper, we propose a black-box adversarial attack on network traffic classification. The proposed attack successfully evades deep machine learning-based classifiers, which highlights the potential security threat of using deep machine learning techniques to realize autonomous networks.