Biblio

Found 743 results

Filters: Keyword is machine learning
2021-05-03
Paulsen, Brandon, Wang, Jingbo, Wang, Jiawei, Wang, Chao.  2020.  NEURODIFF: Scalable Differential Verification of Neural Networks using Fine-Grained Approximation. 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE). :784–796.
As neural networks make their way into safety-critical systems, where misbehavior can lead to catastrophes, there is a growing interest in certifying the equivalence of two structurally similar neural networks - a problem known as differential verification. For example, compression techniques are often used in practice for deploying trained neural networks on computationally- and energy-constrained devices, which raises the question of how faithfully the compressed network mimics the original network. Unfortunately, existing methods either focus on verifying a single network or rely on loose approximations to prove the equivalence of two networks. Due to overly conservative approximation, differential verification lacks scalability in terms of both accuracy and computational cost. To overcome these problems, we propose NEURODIFF, a symbolic and fine-grained approximation technique that drastically increases the accuracy of differential verification on feed-forward ReLU networks while achieving many orders-of-magnitude speedup. NEURODIFF has two key contributions. The first one is new convex approximations that more accurately bound the difference of two networks under all possible inputs. The second one is judicious use of symbolic variables to represent neurons whose difference bounds have accumulated significant error. We find that these two techniques are complementary, i.e., when combined, the benefit is greater than the sum of their individual benefits. We have evaluated NEURODIFF on a variety of differential verification tasks. Our results show that NEURODIFF is up to 1000X faster and 5X more accurate than the state-of-the-art tool.
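
For readers unfamiliar with differential verification, the sketch below illustrates the baseline that fine-grained techniques such as NEURODIFF improve upon: naive interval arithmetic propagated independently through two structurally identical feed-forward ReLU networks, giving a (loose) bound on their output difference. The networks, weights, and input box are invented for illustration; this is not the paper's symbolic, fine-grained approximation.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> W @ x + b with interval arithmetic."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_forward(layers, lo, hi):
    """Bound the outputs of a feed-forward ReLU network over the input box [lo, hi]."""
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:                      # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

rng = np.random.default_rng(0)
def random_net(sizes):
    return [(rng.normal(size=(m, n)), rng.normal(size=m))
            for n, m in zip(sizes[:-1], sizes[1:])]

original = random_net([2, 8, 8, 1])
# "Compressed" network: same structure, slightly perturbed weights (stand-in for pruning/quantisation).
compressed = [(W + 0.01 * rng.normal(size=W.shape), b) for W, b in original]

x_lo, x_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lo1, hi1 = interval_forward(original, x_lo, x_hi)
lo2, hi2 = interval_forward(compressed, x_lo, x_hi)

# Naive difference bound: the two boxes are treated independently, so the bound is loose --
# exactly the conservatism that fine-grained differential verification tries to avoid.
print("output difference bounded by", lo1 - hi2, "to", hi1 - lo2)
```
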
Sohail, Muhammad, Zheng, Quan, Rezaiefar, Zeinab, Khan, Muhammad Alamgeer, Ullah, Rizwan, Tan, Xiaobin, Yang, Jian, Yuan, Liu.  2020.  Triangle Area Based Multivariate Correlation Analysis for Detecting and Mitigating Cache Pollution Attacks in Named Data Networking. 2020 3rd International Conference on Hot Information-Centric Networking (HotICN). :114–121.
The key feature of NDN is in-network caching: every router has its own cache to store data for future use, which improves network bandwidth usage and reduces network latency. However, in-network caching increases security risks in the form of cache pollution attacks (CPA), which include locality disruption (ruining the cache locality by sending random requests for unpopular contents to make them popular) and false locality (introducing unpopular contents into the router's cache by sending requests for a set of unpopular contents). In this paper, we propose a machine learning method, named Triangle Area Based Multivariate Correlation Analysis (TAB-MCA), that detects cache pollution attacks in NDN. This detection system has two parts: the triangle-area-based MCA technique and the threshold-based anomaly detection technique. The TAB-MCA technique extracts hidden geometrical correlations between pairs of distinct features for all possible permutations, and the threshold-based anomaly detection technique helps our model distinguish attacks from legitimate traffic records without requiring prior knowledge. Our technique detects locality disruption, false locality, and combinations of the two with high accuracy. Implemented on the XC-topology, the proposed method shows high efficiency in mitigating these attacks. In comparison to other ML methods, our proposed method has a low overhead cost in mitigating CPA, as it does not require prior knowledge of attackers. Additionally, our method can also detect non-uniform attack distributions.
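
As a rough illustration of the triangle-area idea behind MCA-style detectors (not the paper's exact TAB-MCA feature set), the sketch below builds a triangle-area map per traffic record, profiles it over normal records, and scores new records against the profile with a simple threshold; all feature values are synthetic.

```python
import numpy as np

def triangle_area_map(x):
    """Triangle-area map (TAM) of one traffic record x with m features.

    For each feature pair (i, j), project x onto the (i, j) plane and take the
    area of the triangle formed by the origin, (x_i, 0) and (0, x_j)."""
    x = np.asarray(x, dtype=float)
    return 0.5 * np.abs(np.outer(x, x))                  # area_ij = |x_i * x_j| / 2

def profile(records):
    """Mean TAM and per-cell std over legitimate records (the normal profile)."""
    tams = np.stack([triangle_area_map(r) for r in records])
    return tams.mean(axis=0), tams.std(axis=0) + 1e-9

def anomaly_score(x, mean_tam, std_tam):
    """Distance of a record's TAM from the normal profile; compared against a threshold."""
    return np.abs((triangle_area_map(x) - mean_tam) / std_tam).mean()

rng = np.random.default_rng(1)
normal = rng.normal(loc=1.0, scale=0.1, size=(500, 6))    # made-up cache/interest statistics
mean_tam, std_tam = profile(normal)

legit = rng.normal(1.0, 0.1, size=6)
attack = rng.normal(3.0, 0.5, size=6)                     # e.g. a burst of unpopular-content requests
print("legit score :", anomaly_score(legit, mean_tam, std_tam))
print("attack score:", anomaly_score(attack, mean_tam, std_tam))
```
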
2021-04-29
Hayes, J. Huffman, Payne, J., Essex, E., Cole, K., Alverson, J., Dekhtyar, A., Fang, D., Bernosky, G..  2020.  Towards Improved Network Security Requirements and Policy: Domain-Specific Completeness Analysis via Topic Modeling. 2020 IEEE Seventh International Workshop on Artificial Intelligence for Requirements Engineering (AIRE). :83—86.

Network security policies contain requirements, including system and software features as well as expected and desired actions of human actors. In this paper, we present a framework for evaluating textual network security policies as requirements documents in order to identify areas for improvement. Specifically, our framework concentrates on completeness. We use topic modeling coupled with expert evaluation to learn the complete list of important topics that should be addressed in a network security policy. Using these topics as a checklist, we evaluate (with students) a collection of network security policies for completeness, i.e., the level of presence of these topics in the text. We developed three methods for topic recognition to identify missing or poorly addressed topics. We examine network security policies, report the results of our analysis, and note the preliminary success of our approach.
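A minimal sketch of the topic-modeling step described above, using scikit-learn's LDA on a toy corpus. The reference policies, the candidate policy, the number of topics, and the 0.2 "addressed" cut-off are illustrative assumptions, not values from the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for a set of reference network security policies.
reference_policies = [
    "passwords must be rotated and multi factor authentication enforced for remote access",
    "all network traffic shall be logged and audit logs retained for one year",
    "incident response procedures define escalation contacts and reporting deadlines",
    "firewall rules are reviewed quarterly and default deny is applied to inbound traffic",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(reference_policies)

# Learn a small number of topics from the reference corpus.
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

# A policy under evaluation: check how strongly each learned topic is present.
candidate = ["passwords are rotated every ninety days and logs are kept"]
topic_weights = lda.transform(vectorizer.transform(candidate))[0]

threshold = 0.2   # hypothetical cut-off for "topic addressed"
for t, w in enumerate(topic_weights):
    status = "addressed" if w >= threshold else "missing / weak"
    print(f"topic {t}: weight {w:.2f} -> {status}")
```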

2021-04-27
Himthani, P., Dubey, G. P., Sharma, B. M., Taneja, A..  2020.  Big Data Privacy and Challenges for Machine Learning. 2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC). :707—713.

The field of Big Data has been expanding at an alarming rate since its inception in 2012. The excessive use of social networking sites, the collection of data from sensors for analysis and prediction of future events, and the improvement of customer satisfaction on online shopping portals by monitoring past behavior and instantaneously providing information, items, and offers of interest have all led to this rise in the field of Big Data. This huge amount of data, if analyzed and processed properly, can lead to decisions and outcomes of great value and benefit to organizations and individuals. Security of data and privacy of users are of keen interest and high importance for individuals, industry, and academia. Everyone wants their sensitive information to be kept away from unauthorized access and their assets to be kept safe from security breaches. Privacy and security are equally important for Big Data, and here ensuring them is complex, as the amount of data is enormous. One possible option to effectively and efficiently handle, process, and analyze Big Data is to make use of Machine Learning techniques. Although Machine Learning techniques are straightforward, applying them to Big Data requires the resolution of various issues and is a challenging task, as the size of the data is too big. This paper provides a brief introduction to Big Data, the importance of security and privacy in Big Data, and the various challenges that must be overcome to apply Machine Learning techniques to Big Data.

Piplai, A., Ranade, P., Kotal, A., Mittal, S., Narayanan, S. N., Joshi, A..  2020.  Using Knowledge Graphs and Reinforcement Learning for Malware Analysis. 2020 IEEE International Conference on Big Data (Big Data). :2626—2633.

Machine learning algorithms used to detect attacks are limited by the fact that they cannot incorporate the background knowledge that an analyst has. This limits their suitability for detecting new attacks. Reinforcement learning is different from the traditional machine learning algorithms used in the cybersecurity domain. Compared to traditional ML algorithms, reinforcement learning does not need a mapping of the input-output space or a specific user-defined metric to compare data points. This is important for the cybersecurity domain, especially for malware detection and mitigation, as not all problems have a single, known, correct answer. Often, security researchers have to resort to guided trial and error to understand the presence of a malware and mitigate it. In this paper, we incorporate prior knowledge, represented as Cybersecurity Knowledge Graphs (CKGs), to guide the exploration of an RL algorithm to detect malware. CKGs capture semantic relationships between cyber-entities, including relationships mined from open sources. Instead of trying out random guesses and observing the change in the environment, we aim to use verified knowledge about cyber-attacks to guide our reinforcement learning algorithm to effectively identify ways to detect the presence of malicious filenames so that they can be deleted to mitigate a cyber-attack. We show that such a guided system outperforms a base RL system in detecting malware.
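The following toy sketch shows one way knowledge-graph-derived priors could bias the exploration of a simple value-learning loop, in the spirit of the guided exploration described above. The actions, priors, and reward function are invented for illustration and do not reproduce the paper's CKG or RL formulation.

```python
import random

actions = ["scan_process_list", "check_registry_keys", "inspect_network_conns", "delete_file"]

# Hypothetical priors derived from a Cybersecurity Knowledge Graph: for this malware
# family the KG links the observed indicators to registry persistence, so related
# actions receive a higher exploration weight.
kg_prior = {"scan_process_list": 0.2, "check_registry_keys": 0.5,
            "inspect_network_conns": 0.2, "delete_file": 0.1}

Q = {a: 0.0 for a in actions}
alpha, epsilon = 0.1, 0.2

def env_reward(action):
    # Stand-in environment: registry inspection reveals the malicious filename.
    return 1.0 if action == "check_registry_keys" else 0.0

for step in range(200):
    if random.random() < epsilon:
        # KG-guided exploration instead of a uniformly random guess.
        action = random.choices(actions, weights=[kg_prior[a] for a in actions])[0]
    else:
        action = max(Q, key=Q.get)
    Q[action] += alpha * (env_reward(action) - Q[action])

print({a: round(v, 3) for a, v in Q.items()})
```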

Sharma, S., Zavarsky, P., Butakov, S..  2020.  Machine Learning based Intrusion Detection System for Web-Based Attacks. 2020 IEEE 6th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). :227—230.

Various studies have been performed to explore the feasibility of detecting web-based attacks with machine learning techniques. False-positive and false-negative results have been reported as a major issue to be addressed in order to make machine learning-based detection and prevention of web-based attacks reliable and trustworthy. In our research, we tried to identify and address the root cause of the false-positive and false-negative results. In our experiment, we used the CSIC 2010 HTTP dataset, which contains generated traffic targeting an e-commerce web application. Our experimental results demonstrate that applying the proposed fine-tuned feature set extraction results in improved detection and classification of web-based attacks for all tested machine learning algorithms. The performance of the machine learning algorithms in the detection of attacks was evaluated by the Precision, Recall, Accuracy, and F-measure metrics. Among the three tested algorithms, the J48 decision tree algorithm provided the highest True Positive rate, Precision, and Recall.
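A small sketch of the general pipeline described above: hand-crafted features are extracted from HTTP requests and fed to a decision tree. The feature set and example URLs are illustrative, and scikit-learn's DecisionTreeClassifier stands in for Weka's J48 (C4.5).

```python
from urllib.parse import urlparse, parse_qs
from sklearn.tree import DecisionTreeClassifier

def request_features(url):
    """A few hand-crafted features of the kind extracted from CSIC 2010 HTTP traffic
    (lengths, parameter count, special-character count)."""
    query = urlparse(url).query
    return [
        len(url),
        len(query),
        len(parse_qs(query)),
        sum(query.count(c) for c in "'\"<>;()"),   # crude injection indicator
    ]

# Tiny hand-made examples; in practice the features are extracted from the CSIC dataset.
normal = ["http://shop.local/buy?item=42&qty=1", "http://shop.local/search?q=shoes"]
attack = ["http://shop.local/search?q=' OR '1'='1",
          "http://shop.local/buy?item=<script>alert(1)</script>"]
X = [request_features(u) for u in normal + attack]
y = [0] * len(normal) + [1] * len(attack)

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict([request_features("http://shop.local/search?q=1' UNION SELECT password--")]))
```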

Tahsini, A., Dunstatter, N., Guirguis, M., Ahmed, C. M..  2020.  DeepBLOC: A Framework for Securing CPS through Deep Reinforcement Learning on Stochastic Games. 2020 IEEE Conference on Communications and Network Security (CNS). :1–9.

One important aspect of protecting a Cyber Physical System (CPS) is ensuring that the proper control and measurement signals are propagated within the control loop. The CPS research community has been developing a large set of check blocks that can be integrated within the control loop to check signals against various types of attacks (e.g., false data injection attacks). Unfortunately, it is not possible to integrate all these “checks” within the control loop, as the overhead introduced when checking signals may violate the delay constraints of the control loop. Moreover, these blocks do not operate completely in isolation of each other, as dependencies exist among them in terms of their effectiveness against detecting a subset of attacks. Thus, it becomes a challenging and complex problem to assign the proper checks, especially in the presence of a rational adversary who can observe the assigned check blocks and optimize her own attack strategies accordingly. This paper tackles the inherent state-action space explosion that arises in securing CPS by developing DeepBLOC (DB), a framework in which Deep Reinforcement Learning algorithms are utilized to provide optimal/sub-optimal assignments of check blocks to signals. The framework models stochastic games between the adversary and the CPS defender and derives mixed strategies for assigning check blocks to ensure the integrity of the propagated signals while abiding by the real-time constraints dictated by the control loop. Through extensive simulation experiments and a real implementation on a water purification system, we show that DB achieves assignment strategies that outperform other strategies and heuristics.

Marchisio, A., Nanfa, G., Khalid, F., Hanif, M. A., Martina, M., Shafique, M..  2020.  Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
Spiking Neural Networks (SNNs) claim to present many advantages in terms of biological plausibility and energy efficiency compared to standard Deep Neural Networks (DNNs). Recent works have shown that DNNs are vulnerable to adversarial attacks, i.e., small perturbations added to the input data can lead to targeted or random misclassifications. In this paper, we aim at investigating the key research question: "Are SNNs secure?" Towards this, we perform a comparative study of the security vulnerabilities in SNNs and DNNs w.r.t. the adversarial noise. Afterwards, we propose a novel black-box attack methodology, i.e., without the knowledge of the internal structure of the SNN, which employs a greedy heuristic to automatically generate imperceptible and robust adversarial examples (i.e., attack images) for the given SNN. We perform an in-depth evaluation for a Spiking Deep Belief Network (SDBN) and a DNN having the same number of layers and neurons (to obtain a fair comparison), in order to study the efficiency of our methodology and to understand the differences between SNNs and DNNs w.r.t. the adversarial examples. Our work opens new avenues of research towards the robustness of the SNNs, considering their similarities to the human brain's functionality.
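
A toy sketch of a greedy black-box attack loop of the kind described above: the attacker only queries the model's class scores and repeatedly perturbs the single pixel that most reduces the margin of the true class. The query_model stand-in, step size, and budget are invented for illustration and do not reproduce the paper's methodology.

```python
import numpy as np

def query_model(image):
    """Black-box stand-in for the network under attack: returns 10 class scores.
    A fixed random linear scorer keeps the example self-contained."""
    rng = np.random.default_rng(0)
    W = rng.normal(size=(10, image.size))
    return W @ image.ravel()

def greedy_attack(image, true_label, eps=0.2, max_steps=40):
    """Greedy black-box attack: at each step, perturb the single pixel whose change
    most reduces the true-class margin, until misclassification or budget exhaustion."""
    adv = image.copy()
    for _ in range(max_steps):
        scores = query_model(adv)
        if scores.argmax() != true_label:
            return adv                                   # adversarial example found
        flat = adv.ravel()
        margins = []
        for i in range(flat.size):
            candidate = flat.copy()
            candidate[i] = np.clip(candidate[i] + eps, 0.0, 1.0)
            s = query_model(candidate.reshape(adv.shape))
            margins.append(s[true_label] - np.delete(s, true_label).max())
        best = int(np.argmin(margins))                   # pixel that hurts the true class most
        flat[best] = np.clip(flat[best] + eps, 0.0, 1.0)
        adv = flat.reshape(adv.shape)
    return adv

img = np.random.default_rng(1).uniform(0.0, 1.0, size=(8, 8))
label = int(query_model(img).argmax())
adv = greedy_attack(img, label)
print("pixels changed:", int((adv != img).sum()), "| new label:", int(query_model(adv).argmax()))
```
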
2021-04-09
Chytas, S. P., Maglaras, L., Derhab, A., Stamoulis, G..  2020.  Assessment of Machine Learning Techniques for Building an Efficient IDS. 2020 First International Conference of Smart Systems and Emerging Technologies (SMARTTECH). :165—170.
Intrusion Detection Systems (IDS) are systems that detect and block potential threats (e.g., DDoS attacks) in a network. In this project, we explore the performance of several machine learning techniques when used as parts of an IDS. We experiment with the CICIDS2017 dataset, one of the biggest and most complete IDS datasets in terms of realistic background traffic and the variety of cyber attacks it incorporates. The techniques we present are applicable to any IDS dataset and can be used as a basis for deploying a real-time IDS in complex environments.
Mishra, A., Yadav, P..  2020.  Anomaly-based IDS to Detect Attack Using Various Artificial Intelligence Machine Learning Algorithms: A Review. 2nd International Conference on Data, Engineering and Applications (IDEA). :1—7.
Cyber-attacks are becoming more complex, making accurate intrusion detection (ID) an increasingly difficult task. Failure to prevent intrusions can reduce the reliability of security services, for example, the integrity, privacy, and availability of data. The rapid proliferation of computer networks (CNs) has reshaped the perception of network security. Easily accessible environments expose computer networks to many threats from hackers. Threats to a network are numerous and potentially devastating. Researchers have developed Intrusion Detection Systems (IDS) capable of identifying attacks in a wide variety of environments. Several approaches to intrusion detection, usually identified as Signature-based Intrusion Detection Systems (SIDS) and Anomaly-based Intrusion Detection Systems (AIDS), have been proposed in the literature to address computer safety hazards. This survey paper provides a review of current IDS, a complete analysis of prominent recent works, and the datasets commonly used for evaluation purposes. It also introduces evasion techniques used by attackers to avoid detection and delivers a description of AIDS for attack detection. IDS is an applied research area in artificial intelligence (AI) that uses multiple machine learning algorithms.
Lin, T., Shi, Y., Shu, N., Cheng, D., Hong, X., Song, J., Gwee, B. H..  2020.  Deep Learning-Based Image Analysis Framework for Hardware Assurance of Digital Integrated Circuits. 2020 IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA). :1—6.
We propose an Artificial Intelligence (AI)/Deep Learning (DL)-based image analysis framework for hardware assurance of digital integrated circuits (ICs). Our aim is to examine and verify various hardware information from analyzing the Scanning Electron Microscope (SEM) images of an IC. In our proposed framework, we apply DL-based methods at all essential steps of the analysis. To the best of our knowledge, this is the first such framework that makes heavy use of DL-based methods at all essential analysis steps. Further, to reduce time and effort required in model re-training, we propose and demonstrate various automated or semi-automated training data preparation methods and demonstrate the effectiveness of using synthetic data to train a model. By applying our proposed framework to analyzing a set of SEM images of a large digital IC, we prove its efficacy. Our DL-based methods are fast, accurate, robust against noise, and can automate tasks that were previously performed mainly manually. Overall, we show that DL-based methods can largely increase the level of automation in hardware assurance of digital ICs and improve its accuracy.
2021-04-08
Rhee, K. H..  2020.  Composition of Visual Feature Vector Pattern for Deep Learning in Image Forensics. IEEE Access. 8:188970—188980.

In image forensics, to determine whether an image has been improperly transformed, the features included in the suspicious image are extracted and examined. In general, the features extracted for the detection of forged images are numerical values, so they are somewhat unsuitable for direct use in a CNN structure for image classification. In this paper, the feature vector is extracted using a least-squares solution: a suspicious image is treated as a matrix, and the coefficients of its solution serve as the feature vector. Two solutions are obtained from two images, the original and its median filter residual (MFR). Subsequently, the two features are formed into a visualized pattern and then fed into a CNN for deep learning to classify the variously transformed images. A new structure for the CNN layers was also designed as a hybrid of the inception module and the residual block to classify the visualized feature vector patterns. The performance of the proposed image forensics detection (IFD) scheme was measured on seven types of transformed images: average filtered (window size: 3 × 3), Gaussian filtered (window size: 3 × 3), JPEG compressed (quality factor: 90, 70), median filtered (window size: 3 × 3, 5 × 5), and unaltered. The visualized patterns are fed into the image input layer of the designed CNN hybrid model. Throughout the experiments, the accuracy of median filtering detection was over 98%. Also, the area under the curve (AUC) computed from sensitivity (TP: true positive rate) and 1-specificity (FP: false positive rate) for the proposed IFD scheme approached 1 on the designed CNN hybrid model. Experimental results show high efficiency and performance in classifying the various transformed images. Therefore, the grade evaluation of the proposed scheme is “Excellent (A)”.
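The following sketch illustrates the general recipe described above under an assumed, generic least-squares formulation (fitting each image row against a polynomial basis and flattening the coefficients): compute the median filter residual, extract one least-squares feature vector from the original and one from the MFR, and stack them into the pattern that would be fed to a CNN. The basis choice and image are illustrative, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import median_filter

def ls_feature_vector(img, order=8):
    """Least-squares coefficients of each image row fitted against a small polynomial
    basis; the coefficient matrix is flattened into the feature vector.
    (A generic stand-in for the paper's least-squares formulation.)"""
    h, w = img.shape
    x = np.linspace(-1, 1, w)
    A = np.vander(x, order)                               # basis: powers of the column index
    coeffs, *_ = np.linalg.lstsq(A, img.T, rcond=None)    # solve A @ C = img.T
    return coeffs.T.ravel()

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(32, 32))

mfr = img - median_filter(img, size=3)                    # median filter residual (MFR)

# Two feature vectors (original + MFR), stacked into the "visualized pattern"
# that would be fed to the CNN classifier.
pattern = np.stack([ls_feature_vector(img), ls_feature_vector(mfr)])
print(pattern.shape)
```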

Sarma, M. S., Srinivas, Y., Abhiram, M., Ullala, L., Prasanthi, M. S., Rao, J. R..  2017.  Insider Threat Detection with Face Recognition and KNN User Classification. 2017 IEEE International Conference on Cloud Computing in Emerging Markets (CCEM). :39—44.
Information security in cloud storage is a key concern with regard to degree of trust and cloud penetration. The cloud user community needs to ascertain performance and security via QoS. Numerous models have been proposed [2][3][6][7] to deal with security concerns. Detection and prevention of insider threats are concerns that also need to be tackled. Since the attacker is aware of sensitive information, threats due to cloud insiders are a grave concern. In this paper, we propose an authentication mechanism that performs authentication by verifying the facial features of the cloud user in addition to username and password, thereby acting as two-factor authentication. A new QoS has been proposed which is capable of monitoring and detecting insider threats using machine learning techniques. The KNN classification algorithm has been used to classify users into legitimate, possibly legitimate, possibly not legitimate, and not legitimate groups to verify image authenticity and conclude whether there is any possible insider threat. A threat detection model has also been proposed for insider threats, which utilizes the facial recognition and monitoring models. The security method put forth in [6][7] is honed to include threat detection QoS to earn a higher degree of trust from the cloud user community. As a recommendation, the threat detection module should be harnessed in private cloud deployments such as defense and pharma applications. Experimentation has been conducted using open source machine learning libraries, and the results are included in this paper.
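
A minimal sketch of the KNN user-classification step, assuming hypothetical per-login features (face-match score, login-hour deviation, failed-attempt count) and synthetic training data; only the four legitimacy groups are taken from the abstract.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical per-login feature vectors: [face-match score, login-hour deviation,
# failed-attempt count]. Labels follow the four groups used in the paper.
LABELS = ["legitimate", "possibly legitimate", "possibly not legitimate", "not legitimate"]

rng = np.random.default_rng(0)
X_train = np.vstack([
    rng.normal([0.95, 0.5, 0.2], 0.05, size=(30, 3)),   # legitimate
    rng.normal([0.80, 1.5, 1.0], 0.10, size=(30, 3)),   # possibly legitimate
    rng.normal([0.60, 3.0, 2.0], 0.10, size=(30, 3)),   # possibly not legitimate
    rng.normal([0.30, 5.0, 4.0], 0.15, size=(30, 3)),   # not legitimate
])
y_train = np.repeat(np.arange(4), 30)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

session = [[0.55, 3.2, 2.5]]                             # a suspicious login attempt
print(LABELS[knn.predict(session)[0]])
```
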
2021-03-30
Foroughi, F., Hadipour, H., Shafiee, A. M..  2020.  High-Performance Monitoring Sensors for Home Computer Users Security Profiling. 2020 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA). :1—7.

Recognising a user's risky behaviours in real time is an important element of providing appropriate solutions and recommending suitable actions in response to cybersecurity threats. Employing user modelling and machine learning can automate this process, but it requires a high-performance intelligent agent to create the user security profile. User profiling is the process of producing a profile of the user from historical information and past details. This research tries to identify the monitoring factors and suggests a novel observation solution to create high-performance sensors that generate the security profile of a home user while respecting the user's privacy. This observer agent helps to create a decision-making model that influences the user's decisions in the face of real-time threats or risky behaviours.

Tai, J., Alsmadi, I., Zhang, Y., Qiao, F..  2020.  Machine Learning Methods for Anomaly Detection in Industrial Control Systems. 2020 IEEE International Conference on Big Data (Big Data). :2333—2339.

This paper examines multiple machine learning models to find the model that best indicates anomalous activity in an industrial control system that is under a software-based attack. The researched machine learning models are Random Forest, Gradient Boosting Machine, Artificial Neural Network, and Recurrent Neural Network classifiers built in Python and tested against the HIL-based Augmented ICS dataset. Although the results showed that Random Forest, Gradient Boosting Machine, Artificial Neural Network, and Long Short-Term Memory classification models have great potential for anomaly detection in industrial control systems, we found that Random Forest with tuned hyperparameters slightly outperformed the other models.
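A small sketch of the approach the abstract describes for its best model: a Random Forest with tuned hyperparameters evaluated on an imbalanced dataset. Synthetic data stands in for the HIL-based Augmented ICS dataset, and the hyperparameter grid is illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import f1_score

# Synthetic stand-in for HIL-based Augmented ICS sensor/actuator features (10% anomalies).
X, y = make_classification(n_samples=3000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Small hyperparameter grid; the paper reports that a tuned Random Forest slightly
# outperformed the GBM, ANN and LSTM models.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10], "min_samples_leaf": [1, 5]},
    scoring="f1",
    cv=3,
)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("test F1:", round(f1_score(y_te, grid.predict(X_te)), 3))
```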

Pyatnisky, I. A., Sokolov, A. N..  2020.  Assessment of the Applicability of Autoencoders in the Problem of Detecting Anomalies in the Work of Industrial Control Systems. 2020 Global Smart Industry Conference (GloSIC). :234—239.

Deep learning methods are increasingly becoming solutions to complex problems, including the search for anomalies. While fully-connected and convolutional neural networks have already found application in classification problems, their applicability to the problem of detecting anomalies is limited. In this regard, it is proposed to use autoencoders, previously used only for dimensionality reduction and noise removal, as a method for detecting anomalies in industrial control systems. A new method based on autoencoders is proposed for detecting anomalies in the operation of industrial control systems (ICS). Several autoencoder-based neural networks with different architectures were trained, and the effectiveness of each of them in detecting anomalies in the operation of process control systems was evaluated. Autoencoders can detect the most complex and non-linear dependencies in the data and, as a result, can show the best quality for detecting anomalies. In some cases, autoencoders also require fewer machine resources.
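The sketch below shows the usual autoencoder-based anomaly detection recipe the abstract refers to: train an undercomplete autoencoder on normal ICS data only and flag samples whose reconstruction error exceeds a threshold. The architecture, synthetic data, and 99th-percentile threshold are illustrative assumptions, written with Keras.

```python
import numpy as np
from tensorflow import keras

# Synthetic stand-in for ICS process variables: train only on normal operation.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(2000, 16)).astype("float32")
anomalous = rng.normal(4.0, 1.0, size=(50, 16)).astype("float32")

# Simple undercomplete autoencoder: 16 -> 8 -> 4 -> 8 -> 16.
autoencoder = keras.Sequential([
    keras.Input(shape=(16,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(16, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(normal, normal, epochs=20, batch_size=64, verbose=0)

def reconstruction_error(x):
    return np.mean((autoencoder.predict(x, verbose=0) - x) ** 2, axis=1)

# Threshold chosen from the training distribution; samples above it are flagged.
threshold = np.percentile(reconstruction_error(normal), 99)
flagged = int((reconstruction_error(anomalous) > threshold).sum())
print("flagged anomalies:", flagged, "of", len(anomalous))
```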

Kuchar, K., Fujdiak, R., Blazek, P., Martinasek, Z., Holasova, E..  2020.  Simplified Method for Fast and Efficient Incident Detection in Industrial Networks. 2020 4th Cyber Security in Networking Conference (CSNet). :1—3.

This article focuses on industrial networks and their security. An industrial network typically works with older devices that do not provide security at the level of today's requirements, and the protocols often do not support security at a sufficient level either. It is necessary to deal with these security issues because of ongoing digitization, and additional techniques are therefore required to help with security. For this reason, it is possible to deploy additional elements that provide extra security and ensure monitoring of the network, such as an Intrusion Detection System. These systems recognize known signatures and anomalies. Methods of detecting security incidents by detecting anomalies in network traffic are described. The proposed methods focus on detecting DoS attacks in the industrial Modbus protocol and on operations performed outside the standard interval in the Distributed Network Protocol 3. The functionality of the proposed methods is tested in the Zeek IDS.
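As a simple illustration of the kind of lightweight incident detection the article targets, the sketch below flags a possible Modbus DoS when the request rate from one source exceeds a sliding-window threshold. The window length and limit are invented, and this sketch is independent of the Zeek implementation used in the paper.

```python
from collections import deque

class ModbusRateDetector:
    """Flag a possible DoS when the number of Modbus requests from one source
    within a sliding time window exceeds a limit (window and limit are illustrative)."""

    def __init__(self, window_seconds=1.0, max_requests=100):
        self.window = window_seconds
        self.limit = max_requests
        self.timestamps = {}

    def observe(self, src_ip, ts):
        q = self.timestamps.setdefault(src_ip, deque())
        q.append(ts)
        while q and ts - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit          # True -> raise an incident

detector = ModbusRateDetector(window_seconds=1.0, max_requests=50)
# Simulated burst of requests from one client within 200 ms.
alerts = [detector.observe("192.168.0.10", t / 1000.0) for t in range(200)]
print("alert raised:", any(alerts))
```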

2021-03-29
Shaout, A., Schmidt, N..  2020.  Keystroke Identifier Using Fuzzy Logic to Increase Password Security. 2020 21st International Arab Conference on Information Technology (ACIT). :1—8.

Cybersecurity is a major issue today. It is predicted that cybercrime will cost the world $6 trillion annually by 2021. It is important to make logins secure as well as to make advances in security in order to catch cybercriminals. This paper designs and creates a device that uses fuzzy logic to identify a person by the rhythm and frequency of their typing. The device takes data from a user during a normal password entry session. This data is used to build a fuzzy system that can identify the user by their typing speed. One application of this work is a more secure log-in system: the system checks not only that the correct password was entered but also that the rhythm of how the password was typed matches the user. Another application is helping to catch cybercriminals. A cybercriminal may type with a certain rhythm, and this rhythm could be used like a fingerprint to help officials locate them.
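A minimal sketch of the fuzzy-membership idea: inter-key timing intervals are scored against a triangular membership function built from the enrolled user's typical rhythm. The 140/180/240 ms profile and the sample intervals are invented for illustration and do not come from the paper.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical enrolled profile: the user's typical inter-key interval is ~180 ms.
PROFILE_MS = {"a": 140.0, "b": 180.0, "c": 240.0}

def match_degree(intervals_ms):
    """Average membership of the observed inter-key intervals in the user's
    'typical rhythm' fuzzy set; above a threshold the typist is accepted."""
    degrees = [triangular(t, **PROFILE_MS) for t in intervals_ms]
    return sum(degrees) / len(degrees)

genuine = [175, 190, 182, 170, 200]     # ms between successive keystrokes
imposter = [90, 320, 110, 280, 95]
print("genuine :", round(match_degree(genuine), 2))
print("imposter:", round(match_degree(imposter), 2))
```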

Pranav, E., Kamal, S., Chandran, C. Satheesh, Supriya, M. H..  2020.  Facial Emotion Recognition Using Deep Convolutional Neural Network. 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS). :317—320.

The rapid growth of artificial intelligence has contributed a lot to the technology world. As traditional algorithms failed to meet human needs in real time, machine learning and deep learning algorithms have achieved great success in different applications such as classification systems, recommendation systems, pattern recognition, etc. Emotion plays a vital role in determining the thoughts, behaviour, and feelings of a human. An emotion recognition system can be built by utilizing the benefits of deep learning, and different applications such as feedback analysis, face unlocking, etc. can be implemented with good accuracy. The main focus of this work is to create a Deep Convolutional Neural Network (DCNN) model that classifies 5 different human facial emotions. The model is trained, tested, and validated using a manually collected image dataset.

Moti, Z., Hashemi, S., Jahromi, A. N..  2020.  A Deep Learning-based Malware Hunting Technique to Handle Imbalanced Data. 2020 17th International ISC Conference on Information Security and Cryptology (ISCISC). :48–53.
Nowadays, with the increasing use of computers and the Internet, more people are exposed to cyber-security dangers. According to antivirus companies, malware is one of the most common threats of using the Internet. Therefore, providing a practical solution is critical. Current methods use machine learning approaches to classify malware samples automatically. Despite the success of these approaches, the accuracy and efficiency of these techniques are still inadequate, especially for multi-class classification problems and imbalanced training data sets. To mitigate this problem, we use deep learning-based algorithms for classification and generation of new malware samples. Our model is based on the opcode sequences, which are given to the model without any pre-processing. Besides, we use a novel generative adversarial network to generate new opcode sequences for oversampling minority classes. Also, we propose a model that is a combination of Convolutional Neural Network (CNN) and Long Short Term Memory (LSTM) to classify malware samples. CNN is used to capture short-term dependencies between features, while LSTM is used to capture longer-term dependencies. The experimental results show our method can classify malware samples into their corresponding families effectively. Our model achieves 98.99% validation accuracy.
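A compact Keras sketch of a CNN + LSTM classifier over opcode sequences in the spirit of the model described above; the vocabulary size, sequence length, number of families, and layer sizes are illustrative assumptions, and the GAN-based oversampling step is not shown.

```python
from tensorflow import keras

VOCAB = 256          # number of distinct opcodes (illustrative)
SEQ_LEN = 400        # opcode-sequence length fed to the model (illustrative)
N_FAMILIES = 9       # number of malware families (illustrative)

# The Conv1D layer captures short-range opcode patterns, the LSTM the longer-range
# dependencies, mirroring the combination the abstract describes.
model = keras.Sequential([
    keras.Input(shape=(SEQ_LEN,)),
    keras.layers.Embedding(input_dim=VOCAB, output_dim=32),
    keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    keras.layers.MaxPooling1D(pool_size=2),
    keras.layers.LSTM(64),
    keras.layers.Dense(N_FAMILIES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```
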
Olaimat, M. Al, Lee, D., Kim, Y., Kim, J., Kim, J..  2020.  A Learning-based Data Augmentation for Network Anomaly Detection. 2020 29th International Conference on Computer Communications and Networks (ICCCN). :1–10.
While machine learning technologies have advanced remarkably over the past several years, one of the fundamental requirements for the success of learning-based approaches is the availability of high-quality data that thoroughly represents individual classes in a problem space. Unfortunately, it is not uncommon to observe a significant degree of class imbalance, with only a few instances for minority classes, in many datasets, including network traffic traces that are highly skewed toward a large number of normal connections while containing very few attack instances. A well-known approach to addressing the class imbalance problem is data augmentation, which generates synthetic instances belonging to minority classes. However, traditional statistical techniques may be limited, since the data extended through statistical sampling has the same density as the original data instances with only a minor degree of variation. This paper takes a learning-based approach to data augmentation to enable effective network anomaly detection. One of the critical challenges for the learning-based approach is the mode collapse problem, resulting in a limited diversity of samples, which was also observed in our preliminary experimental results. To this end, we present a novel "Divide-Augment-Combine" (DAC) strategy, which groups the instances based on their characteristics and augments data on a group basis, representing each subset independently using a generative adversarial model. Our experimental results conducted with two recently collected public network datasets (UNSW-NB15 and IDS-2017) show that the proposed technique enhances performance by up to 21.5% for identifying network anomalies.
Yilmaz, I., Masum, R., Siraj, A..  2020.  Addressing Imbalanced Data Problem with Generative Adversarial Network For Intrusion Detection. 2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science (IRI). :25–30.

Machine learning techniques help to understand underlying patterns in datasets in order to develop defense mechanisms against cyber attacks. The Multilayer Perceptron (MLP) is a machine learning technique used in detecting attack vs. benign data. However, it is difficult to construct an effective model when there are imbalances in the dataset that prevent proper classification of attack samples. In this research, we first use the UGR'16 dataset to conduct data wrangling. This technique helps to prepare a test set from the original dataset to train the neural network model effectively. We experimented with a series of inputs of varying sizes (i.e., 10000, 50000, 1 million) to observe the performance of the MLP neural network model with respect to the distribution of features and accuracy. Later, we use a Generative Adversarial Network (GAN) model that produces samples of different attack labels (e.g., blacklist, anomaly spam, ssh scan) to balance the dataset. These samples are generated based on data from the UGR'16 dataset. Further experiments with the MLP neural network model show that a balanced attack sample dataset, made possible with GAN, produces more accurate results than an imbalanced one.

Peng, Y., Fu, G., Luo, Y., Hu, J., Li, B., Yan, Q..  2020.  Detecting Adversarial Examples for Network Intrusion Detection System with GAN. 2020 IEEE 11th International Conference on Software Engineering and Service Science (ICSESS). :6–10.
With the increasing scale of networks, attacks emerge one after another, and security problems become increasingly prominent. The network intrusion detection system is a widely used and effective security measure at present. In addition, with the development of machine learning technology, various intelligent intrusion detection algorithms have begun to appear. By flexibly combining these intelligent methods with intrusion detection technology, the overall performance of intrusion detection can be improved, but the vulnerability of machine learning models in adversarial environments cannot be ignored. In this paper, we study the defense of network intrusion detection systems against adversarial samples. More specifically, we design a defense algorithm for NIDS against adversarial samples using a bidirectional generative adversarial network. The generator learns the data distribution of normal samples during training, acting as an implicit model of the normal data distribution. After training, the adversarial sample detection module calculates the reconstruction error and the discriminator matching error of each sample. The adversarial samples are then removed, which improves the robustness and accuracy of the NIDS in adversarial environments.
Chauhan, R., Heydari, S. Shah.  2020.  Polymorphic Adversarial DDoS attack on IDS using GAN. 2020 International Symposium on Networks, Computers and Communications (ISNCC). :1–6.
Intrusion detection systems are important tools in preventing malicious traffic from penetrating networks and systems. Recently, intrusion detection systems have been rapidly enhancing their detection capabilities using machine learning algorithms. However, these algorithms are vulnerable to new, unknown types of attacks that can evade machine learning-based IDS. In particular, they may be vulnerable to attacks based on Generative Adversarial Networks (GAN). GANs have been widely used in domains such as image processing and natural language processing to generate adversarial data of different types, such as graphics, videos, and texts. We propose a model using GAN to generate adversarial DDoS attacks that can change the attack profile and remain undetected. Our simulation results indicate that, through continuous changes of the attack profile, defensive systems that use incremental learning will still be vulnerable to new attacks.
2021-03-22
Penugonda, S., Yong, S., Gao, A., Cai, K., Sen, B., Fan, J..  2020.  Generic Modeling of Differential Striplines Using Machine Learning Based Regression Analysis. 2020 IEEE International Symposium on Electromagnetic Compatibility Signal/Power Integrity (EMCSI). :226–230.
In this paper, a generic model for a differential stripline is created using machine learning (ML) based regression analysis. A recursive approach to creating various inputs is adopted instead of the traditional design of experiments (DoE) approach. This leads to a reduction in the number of simulations and controls the data points required for performing simulations. The generic model is developed using 48 simulations and is comparable to the linear regression model obtained using 1152 simulations. Additionally, a tabular W-element model of a differential stripline is used to take into account the frequency-dependent dielectric loss. To demonstrate the expandability of this approach, the methodology was applied to two differential pairs of striplines in the frequency range of 10 MHz to 20 GHz.
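A small sketch of ML-based regression for stripline modeling in the spirit of the abstract: geometric inputs are mapped to an electrical response learned from a limited set of simulations. The synthetic target function, feature ranges, and the choice of a random-forest regressor are illustrative assumptions, not the paper's model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in: geometric inputs (trace width, spacing, dielectric height, Er)
# mapped to a scalar response (e.g. differential impedance). In the paper the targets
# come from full-wave simulations, and the model is built from only 48 of them.
rng = np.random.default_rng(0)
X = rng.uniform([0.1, 0.1, 0.05, 3.0], [0.3, 0.4, 0.2, 4.5], size=(48, 4))
y = 60 + 200 * X[:, 2] / X[:, 0] - 15 * X[:, 1] + rng.normal(0, 0.5, 48)   # toy closed form

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out simulations:", round(r2_score(y_te, model.predict(X_te)), 3))
```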