Biblio

Found 251 results

Filters: Keyword is Neural networks
Freas, Christopher B., Shah, Dhara, Harrison, Robert W..  2021.  Accuracy and Generalization of Deep Learning Applied to Large Scale Attacks. 2021 IEEE International Conference on Communications Workshops (ICC Workshops). :1–6.
Distributed denial of service attacks threaten the security and health of the Internet. Remediation relies on up-to-date and accurate attack signatures. Signature-based detection is relatively inexpensive computationally. Yet, signatures are inflexible when small variations exist in the attack vector. Attackers exploit this rigidity by altering their attacks to bypass the signatures. Our previous work revealed a critical problem with conventional machine learning models. Conventional models are unable to generalize on the temporal nature of network flow data to classify attacks. We thus explored the use of deep learning techniques on real flow data. We found that a variety of attacks could be identified with high accuracy compared to previous approaches. We show that a convolutional neural network can be implemented for this problem that is suitable for large volumes of data while maintaining useful levels of accuracy.
Viktoriia, Hrechko, Hnatienko, Hrygorii, Babenko, Tetiana.  2021.  An Intelligent Model to Assess Information Systems Security Level. 2021 Fifth World Conference on Smart Trends in Systems Security and Sustainability (WorldS4). :128–133.
This research presents a model for assessing the cybersecurity maturity level of information systems. The main purpose of the model is to provide comprehensive support for information security specialists and auditors in checking an information system's security level, verifying security policy implementation, and checking compliance with security standards. The model is synthesized from the controls and practices present in ISO 27001 and ISO 27002 and a feedforward (direct signal propagation) neural network. The methodology described in this paper can also be extended to synthesize a model for a different set of security controls and, consequently, to verify compliance with another security standard or policy. The resulting model describes a real, non-automated process of assessing the maturity of an IS at an acceptable level, and it can be recommended for use in real audits of Information Security Management Systems.
Sallam, Youssef F., Ahmed, Hossam El-din H., Saleeb, Adel, El-Bahnasawy, Nirmeen A., El-Samie, Fathi E. Abd.  2021.  Implementation of Network Attack Detection Using Convolutional Neural Network. 2021 International Conference on Electronic Engineering (ICEEM). :1–6.
The Internet has a major impact on the global economy and on everyday human life. This boundless use pushes attackers to target the data frameworks on the Internet. Web attacks undermine the reliability of the Internet and its services. These attacks are classified as User-to-Root (U2R), Remote-to-Local (R2L), Denial-of-Service (DoS) and Probing (Probe). Consequently, securing web frameworks and protecting data are pivotal. The conventional layers of defense, such as antivirus scanners, firewalls and proxies, which are applied to treat security weaknesses, are insufficient. So, Intrusion Detection Systems (IDSs) are utilized to screen computers and data frameworks for security shortcomings. An IDS adds effectiveness in securing networks against attacks. This paper presents an IDS model based on Deep Learning (DL) with a Convolutional Neural Network (CNN). The model has been evaluated on the NSL-KDD dataset. It was trained on KDDTrain+ and tested twice, once using KDDTrain+ and once using KDDTest+. The achieved test accuracies are 99.7% and 98.43%, with false alarm rates of 0.002 and 0.02 for the two test scenarios, respectively.
Jianhua, Xing, Jing, Si, Yongjing, Zhang, Wei, Li, Yuning, Zheng.  2021.  Research on Malware Variant Detection Method Based on Deep Neural Network. 2021 IEEE 5th International Conference on Cryptography, Security and Privacy (CSP). :144–147.
To deal with the increasingly serious threat of malicious code targeting industrial information systems, the characteristics of a domestic secure and controllable operating system and office software were simulated in a virtual sandbox environment based on virtualization technology. First, the serialization detection scheme based on a convolutional neural network algorithm was improved. Then, the API sequence was modeled and analyzed by the improved convolutional neural network algorithm to excavate more locally related information from variant sequences. Finally, variant detection of malicious code was realized. Results showed that this improved method had higher efficiency and accuracy for large-scale malicious code detection, and could be applied to malicious code detection in secure and controllable operating systems.
Gong, Jianhu.  2021.  Network Information Security Pipeline Based on Grey Relational Cluster and Neural Networks. 2021 5th International Conference on Computing Methodologies and Communication (ICCMC). :971–975.
A network information security pipeline based on grey relational clustering and neural networks is designed and implemented in this paper. The method is based on the principle that the optimal selected feature set must contain the feature with the highest information entropy gain with respect to the dataset's categories. First, the feature with the largest information gain is selected from all features as the search starting point, and the class labels of the sample dataset are then fully considered. For better performance, neural networks are used. A network's learning ability is directly determined by its complexity, and learning complex problems over large sample datasets brings about a dramatic increase in network scale. The proposed model is validated through simulation.
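The entropy-gain selection that seeds the pipeline's feature search can be sketched as follows (a minimal illustration on a hypothetical toy dataset; the paper's actual feature set and grey relational clustering stage are not reproduced):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a discrete label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    """Entropy reduction of `labels` after splitting on a discrete feature."""
    cond = 0.0
    for v in np.unique(feature):
        mask = feature == v
        cond += mask.mean() * entropy(labels[mask])
    return entropy(labels) - cond

def best_feature(X, y):
    """Index of the feature with the largest information gain (the search start)."""
    gains = [information_gain(X[:, j], y) for j in range(X.shape[1])]
    return int(np.argmax(gains)), gains

# Hypothetical data: feature 0 predicts the class perfectly, feature 1 is noise.
X = np.array([[0, 1], [0, 0], [1, 1], [1, 0]])
y = np.array([0, 0, 1, 1])
idx, gains = best_feature(X, y)   # idx == 0: feature 0 starts the search
```

The feature returned here would serve only as the starting point; the abstract's pipeline then continues the search over the remaining features.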
Agarwal, Shivam, Khatter, Kiran, Relan, Devanjali.  2021.  Security Threat Sounds Classification Using Neural Network. 2021 8th International Conference on Computing for Sustainable Global Development (INDIACom). :690–694.
Sound plays a key role in human life, and sound recognition systems therefore have a great future ahead. Sound classification and identification systems have many applications, such as personal security and critical surveillance. The main aim of this paper is to detect and classify security sound events using surveillance camera systems with an integrated microphone, based on spectrograms generated from the sounds. This enables tracking security events in cases of emergency. The goal is to propose a security system that accurately detects sound events and improves security sound event detection. We propose a convolutional neural network (CNN) to design a security sound detection system that detects a security event from minimal sound. We used spectrogram images to train the CNN. The neural network was trained on data for different security sounds and was then used to detect security sound events during the testing phase. We used two datasets in our experiments: a training dataset and a testing dataset. Both datasets contain 3 different sound events (glass breaks, gun shots and smoke alarms) to train and test the model, respectively. The proposed system yields good accuracy for sound event detection even with minimal available sound data. The designed system achieved accuracies of 92% and 90% using the CNN on the training and testing datasets, respectively. We conclude that the proposed sound classification framework, which uses spectrogram images of sounds, can be used efficiently to develop sound classification and recognition systems.
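The spectrogram preprocessing step described above can be sketched as follows (a minimal illustration on a synthetic alarm-like tone; the sample rate and STFT parameters are assumptions, not values from the paper):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000                               # assumed sample rate (Hz)
rng = np.random.default_rng(0)
t = np.arange(fs) / fs                   # one second of audio
# Synthetic stand-in for a security sound: a 3 kHz alarm tone plus noise.
x = np.sin(2 * np.pi * 3000 * t) + 0.1 * rng.normal(size=fs)

# Short-time Fourier magnitudes: the "image" a CNN would consume.
f, frames, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
log_spec = 10 * np.log10(Sxx + 1e-10)    # dB scale, stabilized

peak_hz = float(f[np.argmax(Sxx.mean(axis=1))])   # dominant frequency bin
```

Stacking such `log_spec` arrays as single-channel images is one straightforward way to build the CNN training set the abstract describes.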
Zheng, Shiji.  2021.  Network Intrusion Detection Model Based on Convolutional Neural Network. 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). 5:634–637.
Network intrusion detection is an important research direction in network security. The diversification of network intrusion methods and the increasing volume of network data mean that traditional detection methods cannot meet the requirements of the current network environment. The development of deep learning technology and its successful application in the field of artificial intelligence provide a new solution for network intrusion detection. In this paper, convolutional neural networks from deep learning are applied to network intrusion detection, and an intelligent detection model capable of active learning is established. Experiments on the KDD99 dataset show that the model can effectively improve the accuracy and adaptive ability of intrusion detection, and has a certain degree of effectiveness and advancement.
Wang, Xiaoyu, Han, Zhongshou, Yu, Rui.  2021.  Security Situation Prediction Method of Industrial Control Network Based on Ant Colony-RBF Neural Network. 2021 IEEE 2nd International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE). :834–837.
To understand future trends in network security, the field introduced the concept of NSSA (Network Security Situation Awareness). This paper implements a situation assessment model by using game theory algorithms to calculate the situation value of attack and defense behavior. After analyzing the ant colony algorithm and the RBF neural network, the defects of the RBF neural network are remedied using the advantages of the ant colony algorithm, and a situation prediction model based on the ant colony-RBF neural network is realized. Finally, the model was verified experimentally.
Ortega, Alfonso, Fierrez, Julian, Morales, Aythami, Wang, Zilong, Ribeiro, Tony.  2021.  Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Fair and Explainable Automatic Recruitment. 2021 IEEE Winter Conference on Applications of Computer Vision Workshops (WACVW). :78–87.
Machine learning methods are growing in relevance for biometrics and personal information processing in domains such as forensics, e-health, recruitment, and e-learning. In these domains, white-box (human-readable) explanations of systems built on machine learning methods can become crucial. Inductive Logic Programming (ILP) is a subfield of symbolic AI aimed at automatically learning declarative theories from data. Learning from Interpretation Transition (LFIT) is an ILP technique that can learn a propositional logic theory equivalent to a given black-box system (under certain conditions). The present work takes a first step toward a general methodology for incorporating accurate declarative explanations into classic machine learning by checking the viability of LFIT in a specific AI application scenario: fair recruitment based on an automatic tool, generated with machine learning methods, for ranking Curricula Vitae that incorporates soft biometric information (gender and ethnicity). We show the expressiveness of LFIT for this specific problem and propose a scheme that can be applicable to other domains.
Ranade, Priyanka, Piplai, Aritran, Mittal, Sudip, Joshi, Anupam, Finin, Tim.  2021.  Generating Fake Cyber Threat Intelligence Using Transformer-Based Models. 2021 International Joint Conference on Neural Networks (IJCNN). :1–9.
Cyber-defense systems are being developed to automatically ingest Cyber Threat Intelligence (CTI) that contains semi-structured data and/or text to populate knowledge graphs. A potential risk is that fake CTI can be generated and spread through Open-Source Intelligence (OSINT) communities or on the Web to effect a data poisoning attack on these systems. Adversaries can use fake CTI examples as training input to subvert cyber defense systems, forcing their models to learn incorrect inputs to serve the attackers' malicious needs. In this paper, we show how to automatically generate fake CTI text descriptions using transformers. Given an initial prompt sentence, a public language model like GPT-2 with fine-tuning can generate plausible CTI text that can mislead cyber-defense systems. We use the generated fake CTI text to perform a data poisoning attack on a Cybersecurity Knowledge Graph (CKG) and a cybersecurity corpus. The attack introduced adverse impacts such as returning incorrect reasoning outputs, representation poisoning, and corruption of other dependent AI-based cyber defense systems. We evaluate with traditional approaches and conduct a human evaluation study with cyber-security professionals and threat hunters. Based on the study, professional threat hunters were equally likely to consider our fake generated CTI and authentic CTI as true.
Xu, Xiaojun, Wang, Qi, Li, Huichen, Borisov, Nikita, Gunter, Carl A., Li, Bo.  2021.  Detecting AI Trojans Using Meta Neural Analysis. 2021 IEEE Symposium on Security and Privacy (SP). :103–120.
In machine learning Trojan attacks, an adversary trains a corrupted model that obtains good performance on normal data but behaves maliciously on data samples with certain trigger patterns. Several approaches have been proposed to detect such attacks, but they make undesirable assumptions about the attack strategies or require direct access to the trained models, which restricts their utility in practice. This paper addresses these challenges by introducing a Meta Neural Trojan Detection (MNTD) pipeline that does not make assumptions on the attack strategies and only needs black-box access to models. The strategy is to train a meta-classifier that predicts whether a given target model is Trojaned. To train the meta-model without knowledge of the attack strategy, we introduce a technique called jumbo learning that samples a set of Trojaned models following a general distribution. We then dynamically optimize a query set together with the meta-classifier to distinguish between Trojaned and benign models. We evaluate MNTD with experiments on vision, speech, tabular data and natural language text datasets, and against different Trojan attacks such as data poisoning attack, model manipulation attack, and latent attack. We show that MNTD achieves 97% detection AUC score and significantly outperforms existing detection approaches. In addition, MNTD generalizes well and achieves high detection performance against unforeseen attacks. We also propose a robust MNTD pipeline which achieves around 90% detection AUC even when the attacker aims to evade the detection with full knowledge of the system.
Cinà, Antonio Emanuele, Vascon, Sebastiano, Demontis, Ambra, Biggio, Battista, Roli, Fabio, Pelillo, Marcello.  2021.  The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers? 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
One of the most concerning threats for modern AI systems is data poisoning, where the attacker injects maliciously crafted training data to corrupt the system's behavior at test time. Availability poisoning is a particularly worrisome subset of poisoning attacks where the attacker aims to cause a Denial-of-Service (DoS) attack. However, the state-of-the-art algorithms are computationally expensive because they try to solve a complex bi-level optimization problem (the "hammer"). We observed that in particular conditions, namely, where the target model is linear (the "nut"), the usage of computationally costly procedures can be avoided. We propose a counter-intuitive but efficient heuristic that allows contaminating the training set such that the target system's performance is highly compromised. We further suggest a re-parameterization trick to decrease the number of variables to be optimized. Finally, we demonstrate that, under the considered settings, our framework achieves comparable, or even better, performances in terms of the attacker's objective while being significantly more computationally efficient.
Mikhailova, Vasilisa D., Shulika, Maria G., Basan, Elena S., Peskova, Olga Yu..  2021.  Security architecture for UAV. 2021 Ural Symposium on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT). :0431–0434.
Cyber-physical systems are used in many areas of human life, but not enough attention is paid to ensuring their security. As a result of these security gaps, an attacker can launch an attack that not only shuts down the system but also has a negative impact on the environment. The article examines denial-of-service attacks in ad-hoc networks, conducts experiments, and considers the consequences of their successful execution. The research determined that an attack can be detected by changes in transmitted traffic and processor load. The cyber-physical system operates on stable algorithms, and even if legitimate changes occur, they can easily be distinguished from those caused by an attack. The article shows that the use of statistical methods for analyzing traffic and other parameters is justified for detecting an attack. This study shows that each attack affects traffic in its own way and creates unique patterns of behavior change. The experiments were carried out according to a methodology with changes in the intensity of the attacks and changes in normal behavior. The results of this study can further be used to implement a system for detecting attacks on cyber-physical systems, and the collected datasets can be used to train a neural network.
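The statistical detection idea, flagging traffic whose rate deviates sharply from a learned baseline, can be sketched as a simple z-score test (all rates and thresholds here are hypothetical, not values from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
# Baseline: packets/sec observed during normal ad-hoc network operation.
baseline = rng.normal(loc=100.0, scale=5.0, size=200)
mu, sigma = baseline.mean(), baseline.std()

def is_attack(rate, threshold=3.0):
    """Flag a traffic sample whose z-score exceeds the threshold."""
    return abs(rate - mu) / sigma > threshold

flood_detected = is_attack(400.0)   # DoS-like traffic burst
quiet_detected = is_attack(102.0)   # ordinary fluctuation
```

A per-attack-type profile, as the article suggests, would keep one such baseline per monitored parameter (traffic rate, processor load) rather than a single global one.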
Cultice, Tyler, Ionel, Dan, Thapliyal, Himanshu.  2020.  Smart Home Sensor Anomaly Detection Using Convolutional Autoencoder Neural Network. 2020 IEEE International Symposium on Smart Electronic Systems (iSES) (Formerly iNiS). :67–70.
We propose an autoencoder-based approach to anomaly detection in smart grid systems. Data-collecting sensors within smart home systems are susceptible to many data corruption issues, such as malicious attacks or physical malfunctions. By applying machine learning to a smart home or grid, sensor anomalies can be detected automatically for secure data collection and sensor-based system functionality. Early detection of such data corruption issues is essential to the security and functionality of the various sensors and devices within a smart home. We tested the effectiveness of this approach on real smart home sensor data collected over multiple years.
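The reconstruction-error principle behind autoencoder anomaly detection can be illustrated with a linear (PCA-style) stand-in for the paper's convolutional autoencoder (the data is synthetic and the architecture deliberately simplified):

```python
import numpy as np

rng = np.random.default_rng(1)
# Normal sensor readings lie near a 2-D subspace of a 5-D sensor vector.
basis = rng.normal(size=(2, 5))
normal = rng.normal(size=(500, 2)) @ basis + 0.01 * rng.normal(size=(500, 5))

# Fit a linear "autoencoder" (PCA) on normal data only.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
W = Vt[:2]                         # encoder/decoder weights: top 2 components

def recon_error(x):
    z = (x - mean) @ W.T           # encode
    xh = z @ W + mean              # decode
    return float(np.sum((x - xh) ** 2))

normal_err = float(np.mean([recon_error(x) for x in normal]))
anomaly_err = recon_error(rng.normal(size=5) * 5.0)   # corrupted reading
```

The detector simply thresholds the reconstruction error: readings the model was trained on reconstruct almost perfectly, while corrupted readings off the learned manifold do not.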
Li, Gangqiang, Wu, Sissi Xiaoxiao, Zhang, Shengli, Li, Qiang.  2020.  Detect Insider Attacks Using CNN in Decentralized Optimization. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :8758–8762.
This paper studies the security issue of a gossip-based distributed projected gradient (DPG) algorithm, when it is applied for solving a decentralized multi-agent optimization. It is known that the gossip-based DPG algorithm is vulnerable to insider attacks because each agent locally estimates its (sub)gradient without any supervision. This work leverages the convolutional neural network (CNN) to perform the detection and localization of the insider attackers. Compared to the previous work, CNN can learn appropriate decision functions from the original state information without preprocessing through artificially designed rules, thereby alleviating the dependence on complex pre-designed models. Simulation results demonstrate that the proposed CNN-based approach can effectively improve the performance of detecting and localizing malicious agents, as compared with the conventional pre-designed score-based model.
Yin, Yifei, Zulkernine, Farhana, Dahan, Samuel.  2020.  Determining Worker Type from Legal Text Data Using Machine Learning. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :444–450.
This project addresses a classic employment law question in Canada and elsewhere using a machine learning approach: how do we know whether a worker is an employee or an independent contractor? This is a central issue for self-represented litigants insofar as these two legal categories entail very different rights and employment protections. In this interdisciplinary research study, we collaborated with the Conflict Analytics Lab to develop machine learning models aimed at determining whether a worker is an employee or an independent contractor. We present a number of supervised learning models, including a neural network model that we implemented using data labeled by law researchers, and compare the accuracy of the models. Our neural network model achieved an accuracy rate of 91.5%. A critical discussion follows to identify the key features in the data that influence the accuracy of our models and to provide insights about the case outcomes.
Hu, Shengze, He, Chunhui, Ge, Bin, Liu, Fang.  2020.  Enhanced Word Embedding Method in Text Classification. 2020 6th International Conference on Big Data and Information Analytics (BigDIA). :18–22.
For natural language processing (NLP) tasks, word embedding technology has a certain impact on the accuracy of deep neural network algorithms. Since current word embedding methods cannot realize the coexistence of words and phrases in the same vector space, we propose an enhanced word embedding (EWE) method. Before completing the word embedding, this method introduces a unique sentence reorganization technique to rewrite all the sentences in the original training corpus. Then, the original corpus and the reorganized corpus are merged together as the training corpus for the distributed word embedding model, so as to realize the coexistence of words and phrases in the same vector space. We carried out experiments to demonstrate the effectiveness of the EWE algorithm on three classic benchmark datasets. The results show that the EWE method can significantly improve the classification performance of a CNN model.
Hou, Xiaolu, Breier, Jakub, Jap, Dirmanto, Ma, Lei, Bhasin, Shivam, Liu, Yang.  2020.  Security Evaluation of Deep Neural Network Resistance Against Laser Fault Injection. 2020 IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA). :1–6.
Deep learning is becoming a basis of decision-making systems in many application domains, such as autonomous vehicles and health systems, where the risk of misclassification can lead to serious consequences. It is necessary to know to what extent Deep Neural Networks (DNNs) are robust against various types of adversarial conditions. In this paper, we experimentally evaluate DNNs implemented on an embedded device by using laser fault injection, a physical attack technique that is mostly used in the security and reliability communities to test the robustness of various systems. We show practical results on four activation functions: ReLU, softmax, sigmoid, and tanh. Our results point out the misclassification possibilities for DNNs achieved by injecting faults into the hidden layers of the network. We evaluate DNNs using several different attack strategies to show which are the most efficient in terms of misclassification success rate. Outcomes of this work should be taken into account when deploying devices running DNNs in environments where a malicious attacker could tamper with the environmental parameters and bring the device into unstable conditions, resulting in faults.
She, Dongdong, Chen, Yizheng, Shah, Abhishek, Ray, Baishakhi, Jana, Suman.  2020.  Neutaint: Efficient Dynamic Taint Analysis with Neural Networks. 2020 IEEE Symposium on Security and Privacy (SP). :1527–1543.
Dynamic taint analysis (DTA) is widely used by various applications to track information flow during runtime execution. Existing DTA techniques use rule-based taint propagation, which is neither accurate (i.e., high false positive rate) nor efficient (i.e., large runtime overhead). It is hard to specify taint rules for each operation while covering all corner cases correctly. Moreover, overtaint and undertaint errors can accumulate as taint information propagates across multiple operations. Finally, rule-based propagation requires each operation to be inspected before applying the appropriate rules, resulting in prohibitive performance overhead on large real-world applications. In this work, we propose Neutaint, a novel end-to-end approach to track information flow using neural program embeddings. The neural program embeddings model the target program's computations between taint sources and sinks, automatically learning the information flow by observing a diverse set of execution traces. To perform lightweight and precise information flow analysis, we utilize saliency maps, a popular machine learning approach to influence analysis, to reason about the most influential sources for different sinks. Neutaint constructs two saliency maps to summarize both coarse-grained and fine-grained information flow in the neural program embeddings. We compare Neutaint with 3 state-of-the-art dynamic taint analysis tools. The evaluation results show that Neutaint achieves 68% accuracy on average, a 10% improvement over the second-best taint tool Libdft, while reducing runtime overhead by 40× on 6 real-world programs. Neutaint also achieves 61% more edge coverage when used for taint-guided fuzzing, indicating the effectiveness of the identified influential bytes. We also evaluate Neutaint's ability to detect real-world software attacks. The results show that Neutaint can successfully detect different types of vulnerabilities, including buffer/heap/integer overflows, division by zero, etc. Lastly, Neutaint can detect 98.7% of total flows, the highest among all taint analysis tools.
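The saliency-map idea, ranking taint sources by how strongly they influence a sink, can be illustrated with a toy finite-difference sketch (the `sink` function and all values here are hypothetical stand-ins; the paper's system computes gradients through a learned neural embedding rather than by finite differences):

```python
import numpy as np

def sink(sources):
    """Hypothetical stand-in for a learned program embedding: the sink
    depends strongly on source byte 1 and only weakly on byte 0."""
    return np.tanh(3.0 * sources[1] + 0.01 * sources[0])

def saliency(f, x, eps=1e-4):
    """Saliency map via central finite differences: |df/dx_i| per source."""
    grads = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        grads[i] = abs(f(x + d) - f(x - d)) / (2 * eps)
    return grads

x = np.array([0.2, 0.1])
s = saliency(sink, x)
hot_byte = int(np.argmax(s))       # most influential source for this sink
```

Ranking sources by such influence scores is what lets a taint-guided fuzzer concentrate mutations on the bytes that actually reach the sink.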
Ma, Chuang, You, Haisheng, Wang, Li, Zhang, Jiajun.  2020.  Intelligent Cybersecurity Situational Awareness Model Based on Deep Neural Network. 2020 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC). :76–83.
In recent years, we have faced a series of online threats. Continuous malicious attacks on networks have posed a huge threat to users' well-being and property. To deal with the complex security situation in today's network environment, an intelligent network situational awareness model based on deep neural networks is proposed. The nonlinear characteristics of a deep neural network are used to solve the nonlinear fitting problem, and a network security situation assessment system is established. With the situation indicators output by the assessment system as a guide, the main data features are collected according to the characteristics of the network attack methods, and the data is preprocessed. This model designs and trains a 4-layer neural network, which is then used to understand and analyze network situation data, building a network situation perception model based on a deep neural network. The model is used in a simulated attack prediction experiment and compared with perception models using grey theory and a Support Vector Machine (SVM). The experiments show that this model can perceive changes in the state characteristics of network situation data, establish understanding through learning, and finally achieve accurate prediction of network attacks. Through comparison experiments, the deep neural network situation perception model is shown to be effective, accurate and superior.
Xu, Lan, Li, Jianwei, Dai, Li, Yu, Ningmei.  2020.  Hardware Trojans Detection Based on BP Neural Network. 2020 IEEE International Conference on Integrated Circuits, Technologies and Applications (ICTA). :149–150.
This paper uses side-channel analysis to detect hardware Trojans based on a back-propagation neural network. First, a power consumption collection platform is built to collect power waveforms, and an amplifier is utilized to amplify the power consumption information and improve the detection accuracy. Then the small differences between the power waveforms are recognized by the back-propagation neural network to achieve detection. This method is validated on an Advanced Encryption Standard (AES) circuit. Results show this method is able to identify circuits with a Trojan occupying 0.19% of the AES circuit, and the detection accuracy rate can reach 100%.
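The detection step, learning to separate clean and Trojan-infected power waveforms, can be sketched with a single-layer logistic model trained by gradient descent, the simplest case of back-propagation (the traces are synthetic and the Trojan's power offset is exaggerated for illustration; the paper uses a multi-layer network on real measurements):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 50                       # traces per class, samples per trace
clean = rng.normal(0.0, 1.0, size=(n, d))
trojan = rng.normal(0.0, 1.0, size=(n, d))
trojan[:, 10:15] += 1.5              # Trojan's extra power draw (exaggerated)

X = np.vstack([clean, trojan])
y = np.concatenate([np.zeros(n), np.ones(n)])

# One-layer logistic model trained by gradient descent,
# the simplest instance of back-propagation.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * float(np.mean(p - y))

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
train_acc = float(np.mean((p > 0.5) == y))
```

The learned weights concentrate on the trace positions where the Trojan draws power, which is exactly the "small difference between the power waveforms" the network is asked to recognize.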
Gouk, Henry, Hospedales, Timothy M..  2020.  Optimising Network Architectures for Provable Adversarial Robustness. 2020 Sensor Signal Processing for Defence Conference (SSPD). :1–5.
Existing Lipschitz-based provable defences to adversarial examples only cover the L2 threat model. We introduce the first bound that makes use of Lipschitz continuity to provide a more general guarantee for threat models based on any Lp norm. Additionally, a new strategy is proposed for designing network architectures that exhibit superior provable adversarial robustness over conventional convolutional neural networks. Experiments are conducted to validate our theoretical contributions, show that the assumptions made during the design of our novel architecture hold in practice, and quantify the empirical robustness of several Lipschitz-based adversarial defence methods.
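For the L2 case, the classical bound these defences build on, the product of the layers' spectral norms for a network with 1-Lipschitz activations, can be checked numerically (a toy 2-layer ReLU network with random weights; this is not the paper's proposed architecture or its generalized Lp bound):

```python
import numpy as np

rng = np.random.default_rng(3)
# A toy 2-layer network; ReLU is 1-Lipschitz, so the product of the
# layer spectral norms upper-bounds the network's L2 Lipschitz constant.
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(1, 8))

def net(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

L = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)   # ord=2: spectral norm

# Empirical check: no input pair may violate the bound.
ratios = []
for _ in range(100):
    a, b = rng.normal(size=4), rng.normal(size=4)
    ratios.append(np.linalg.norm(net(a) - net(b)) / np.linalg.norm(a - b))
```

A certified-robustness radius then follows from dividing a prediction margin by `L`; the paper's contribution is extending this style of guarantee beyond the L2 norm.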
Zhong, Zhenyu, Hu, Zhisheng, Chen, Xiaowei.  2020.  Quantifying DNN Model Robustness to the Real-World Threats. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :150–157.
DNN models have suffered from adversarial example attacks, which lead to inconsistent prediction results. As opposed to gradient-based attacks, which assume white-box access to the model by the attacker, we focus on more realistic input perturbations from the real world and their actual impact on model robustness without any presence of attackers. In this work, we promote a standardized framework to quantify robustness against real-world threats. It is composed of a set of safety properties associated with common violations, a group of metrics to measure the minimal perturbation that causes a violation, and various criteria that reflect different aspects of model robustness. By revealing comparison results through this framework among 13 pre-trained ImageNet classifiers, three state-of-the-art object detectors, and three cloud-based content moderators, we deliver the status quo of real-world model robustness. Beyond that, we provide robustness benchmarking datasets for the community.
Chen, Jianbo, Jordan, Michael I., Wainwright, Martin J..  2020.  HopSkipJumpAttack: A Query-Efficient Decision-Based Attack. 2020 IEEE Symposium on Security and Privacy (SP). :1277–1294.
The goal of a decision-based adversarial attack on a trained model is to generate adversarial examples based solely on observing output labels returned by the targeted model. We develop HopSkipJumpAttack, a family of algorithms based on a novel estimate of the gradient direction using binary information at the decision boundary. The proposed family includes both untargeted and targeted attacks optimized for $\ell_2$ and $\ell_\infty$ similarity metrics respectively. Theoretical analysis is provided for the proposed algorithms and the gradient direction estimate. Experiments show HopSkipJumpAttack requires significantly fewer model queries than several state-of-the-art decision-based adversarial attacks. It also achieves competitive performance in attacking several widely-used defense mechanisms.
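The core idea, estimating the gradient direction at the decision boundary from binary labels alone, can be sketched on a toy linear classifier (the oracle, boundary point, and sample counts are illustrative; the full attack wraps this estimate in a boundary search and step-size schedule):

```python
import numpy as np

rng = np.random.default_rng(4)
w = np.array([3.0, -1.0, 2.0])                 # true boundary normal (hidden)

def phi(x):
    """Hard-label oracle: only the predicted label's sign is observable."""
    return 1.0 if w @ x > 0 else -1.0

x0 = np.array([2.0, 4.0, -1.0])                # lies on the boundary: w @ x0 == 0

# Monte Carlo gradient-direction estimate from binary labels near x0.
delta, B = 0.1, 2000
u = rng.normal(size=(B, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
est = np.mean([phi(x0 + delta * ui) * ui for ui in u], axis=0)
est /= np.linalg.norm(est)

cos_sim = float(est @ (w / np.linalg.norm(w)))  # alignment with true normal
```

Each random probe contributes its direction weighted by the returned label, so the average points toward the side of the boundary where the target class lies, recovering the gradient direction without any gradient access.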
Liu, Yuan, Zhou, Pingqiang.  2020.  Defending Against Adversarial Attacks in Deep Learning with Robust Auxiliary Classifiers Utilizing Bit Plane Slicing. 2020 Asian Hardware Oriented Security and Trust Symposium (AsianHOST). :1–4.
Deep Neural Networks (DNNs) have been widely used in a variety of fields with great success. However, recent research indicates that DNNs are susceptible to adversarial attacks, which can easily fool well-trained DNNs without being detected by human eyes. In this paper, we propose to combine the target DNN model with robust bit plane classifiers to defend against adversarial attacks. This comes from our finding that successful attacks generate imperceptible perturbations that mainly affect the low-order bits of pixel values in clean images. Hence, using bit planes instead of traditional RGB channels for convolution can effectively reduce the channel modification rate. We conduct experiments on the CIFAR-10 and GTSRB datasets. The results show that our defense method can effectively increase the model accuracy under attack on CIFAR-10 from 8.72% to 85.99% on average, without sacrificing accuracy on clean images.
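Bit plane slicing itself is a one-line operation; the sketch below extracts the 8 binary planes of an 8-bit image and shows that a +1-per-pixel perturbation, of the imperceptible kind the abstract describes, touches the low-order planes far more than the high-order ones (toy data; the paper's classifiers and datasets are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)   # toy 8-bit image

def bit_planes(image):
    """Slice an 8-bit image into 8 binary planes (plane 0 = LSB)."""
    return [((image >> k) & 1) for k in range(8)]

planes = bit_planes(img)
# The planes losslessly reconstruct the image.
recon = sum(p.astype(np.uint16) << k for k, p in enumerate(planes))

# A small perturbation flips mostly low-order planes, leaving the
# high-order structure that the auxiliary classifiers rely on intact.
perturbed = (img.astype(np.int16) + 1).clip(0, 255).astype(np.uint8)
flips = [int(np.sum(a != b)) for a, b in zip(planes, bit_planes(perturbed))]
```

Feeding the high-order planes to auxiliary classifiers is what makes the ensemble robust: the adversary's low-order modifications barely change their inputs.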