Biblio

Found 730 results

Filters: Keyword is machine learning
2021-05-13
Tong, Zhongkai, Zhu, Ziyuan, Wang, Zhanpeng, Wang, Limin, Zhang, Yusha, Liu, Yuxin.  2020.  Cache side-channel attacks detection based on machine learning. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :919—926.
Security has always been one of the main concerns in computer architecture and cloud computing. Cache-based side-channel attacks pose a threat to almost all existing architectures and cloud deployments; in the public cloud especially, the cache is shared among multiple tenants, and cache attacks can exploit this sharing to extract information. Accurately detecting cache side-channel attacks has therefore become a research hotspot. Because a cache side-channel attack requires neither physical contact with the target device nor additional equipment to capture the side-channel information, it is efficient and stealthy, posing a great threat to the security of cryptographic algorithms. Taking the AES algorithm as the target, this paper uses hardware performance counters to obtain features of different cache events under Flush+Reload, Prime+Probe, and Flush+Flush attacks. First, the random forest algorithm is used to filter the cache features, and then a support vector machine is used to model the system. High detection accuracy is achieved under different system loads: 99.92% with no load, 99.85% under average load, and 96.57% under full load.
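To make the pipeline above concrete, here is a minimal sketch, assuming synthetic hardware-performance-counter features: random-forest importance ranking filters the cache-event features, then an SVM is trained on the retained columns. The event count, the number of kept features, and the labels are placeholders, not values from the paper.

```python
# Sketch of the two-stage pipeline described above: random forest feature
# filtering followed by an SVM detector. X holds hypothetical HPC event counts,
# y holds placeholder labels (1 = attack trace, 0 = benign trace).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.poisson(lam=50, size=(2000, 12)).astype(float)   # 12 hypothetical HPC events
y = rng.integers(0, 2, size=2000)                         # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: rank cache-event features by random-forest importance, keep the top 6.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
top_k = np.argsort(rf.feature_importances_)[::-1][:6]

# Stage 2: train an SVM on the selected features only.
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr[:, top_k], y_tr)
print("accuracy:", accuracy_score(y_te, svm.predict(X_te[:, top_k])))
```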
Hachimi, Marouane, Kaddoum, Georges, Gagnon, Ghyslain, Illy, Poulmanogo.  2020.  Multi-stage Jamming Attacks Detection using Deep Learning Combined with Kernelized Support Vector Machine in 5G Cloud Radio Access Networks. 2020 International Symposium on Networks, Computers and Communications (ISNCC). :1—5.
In 5G networks, the Cloud Radio Access Network (C-RAN) is considered a promising future architecture in terms of minimizing energy consumption and allocating resources efficiently by providing real-time cloud infrastructures, cooperative radio, and centralized data processing. Recently, given their vulnerability to malicious attacks, the security of C-RAN networks has attracted significant attention. Among the various anomaly-based intrusion detection techniques, the most promising is machine learning-based intrusion detection, as it learns without human assistance and adjusts its actions accordingly. Many solutions have been proposed in this direction, but they either show low accuracy in attack classification or offer only a single layer of attack detection. This research focuses on deploying a multi-stage machine learning-based intrusion detection (ML-IDS) in 5G C-RAN that can detect and classify four types of jamming attacks: constant jamming, random jamming, deceptive jamming, and reactive jamming. This deployment enhances security by minimizing the false negatives in C-RAN architectures. The experimental evaluation of the proposed solution is carried out using WSN-DS (Wireless Sensor Networks DataSet), a dedicated wireless dataset for intrusion detection. The final attack classification accuracy is 94.51% with a 7.84% false negative rate.
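A minimal sketch of a two-stage setup in the spirit of this abstract follows: a small neural network flags jammed traffic, then a kernelized SVM labels the jamming type. The feature dimensions, architecture, and data are assumptions for illustration; the paper's actual design and the WSN-DS features differ.

```python
# Two-stage sketch: stage 1 detects whether a flow is jammed, stage 2 assigns
# one of the four jamming categories named in the abstract. Data is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 18))                 # placeholder WSN-DS-style features
is_attack = rng.integers(0, 2, size=3000)       # stage-1 labels: attack vs normal
jam_type = rng.integers(0, 4, size=3000)        # stage-2: constant/random/deceptive/reactive

stage1 = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300).fit(X, is_attack)
stage2 = SVC(kernel="rbf").fit(X[is_attack == 1], jam_type[is_attack == 1])

flagged = stage1.predict(X) == 1                # only flagged flows reach stage 2
print("jamming labels for flagged flows:", stage2.predict(X[flagged])[:10])
```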
Fei, Wanghao, Moses, Paul, Davis, Chad.  2020.  Identification of Smart Grid Attacks via State Vector Estimator and Support Vector Machine Methods. 2020 Intermountain Engineering, Technology and Computing (IETC). :1—6.
In recent times, an increasing number of intelligent electronic devices (IEDs) are being deployed to make power systems more reliable and economical. While these technologies are necessary for realizing a cyber-physical infrastructure for future smart power grids, they also introduce new vulnerabilities to different cyber-attacks. Traditional methods such as state vector estimation (SVE) cannot identify cyber-attacks when the grid's geometric information is also injected as part of the attack vector. In this paper, a machine learning based smart grid attack identification method is proposed. The method first collects smart grid power flow data for training, which is later used to classify the attacks. The performance of both the proposed SVM method and the traditional SVE method is validated on the IEEE 14, 30, 39, 57 and 118 bus systems, and the performance with respect to the scale of the power system is evaluated. The results show that the SVM-based method performs better than the SVE-based method at attack identification over a much wider range of power system scales.
Jaafar, Fehmi, Avellaneda, Florent, Alikacem, El-Hackemi.  2020.  Demystifying the Cyber Attribution: An Exploratory Study. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :35–40.
Current cyber attribution approaches propose to use a variety of datasets and analytical techniques to distill the information that will be useful for identifying cyber attackers. At the same time, practitioners and researchers in cyber attribution face several technical and regulatory challenges. In this paper, we describe the main challenges of cyber attribution and present the state of the art of the approaches used to face these challenges. We then present an exploratory study that performs cyber attack attribution based on pattern recognition from real data. In our study, we use attack pattern discovery and identification based on real data collection and analysis.
Fernandes, Steven, Raj, Sunny, Ewetz, Rickard, Pannu, Jodh Singh, Kumar Jha, Sumit, Ortiz, Eddy, Vintila, Iustina, Salter, Margaret.  2020.  Detecting Deepfake Videos using Attribution-Based Confidence Metric. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :1250–1259.
Recent advances in generative adversarial networks have made detecting fake videos a challenging task. In this paper, we propose the application of the state-of-the-art attribution-based confidence (ABC) metric for detecting deepfake videos. The ABC metric does not require access to the training data or training a calibration model on validation data; it can be used to draw inferences even when only the trained model is available. Here, we utilize the ABC metric to characterize whether a video is original or fake. The deep learning model is trained only on original videos, and the ABC metric uses the trained model to generate confidence values. For original videos, the confidence values are greater than 0.94.
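As a hedged illustration of how such a per-frame confidence threshold could be applied at the video level, the sketch below aggregates frame scores and compares the mean against 0.94; abc_confidence is a hypothetical stand-in, not a real library function or the paper's implementation.

```python
# Hedged sketch of threshold-based labeling with an attribution-based confidence
# score. abc_confidence() is a placeholder callable supplied by the caller.
from typing import Callable, Iterable

def classify_video(frames: Iterable, abc_confidence: Callable, threshold: float = 0.94) -> str:
    """Label a video 'original' if the mean ABC confidence over its frames exceeds the threshold."""
    scores = [abc_confidence(f) for f in frames]
    mean_score = sum(scores) / len(scores)
    return "original" if mean_score > threshold else "fake"
```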
Venceslai, Valerio, Marchisio, Alberto, Alouani, Ihsen, Martina, Maurizio, Shafique, Muhammad.  2020.  NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
Due to their proven efficiency, machine-learning systems are deployed in a wide range of complex real-life problems. More specifically, Spiking Neural Networks (SNNs) have emerged as a promising solution to the accuracy, resource-utilization, and energy-efficiency challenges in machine-learning systems. While these systems are going mainstream, they have inherent security and reliability issues. In this paper, we propose NeuroAttack, a cross-layer attack that threatens the integrity of SNNs by exploiting low-level reliability issues through a high-level attack. In particular, we trigger a stealthy fault-injection-based hardware backdoor through carefully crafted adversarial input noise. Our results on Deep Neural Networks (DNNs) and SNNs show a serious integrity threat to state-of-the-art machine-learning techniques.
2021-05-05
Kumar, Rahul, Sethi, Kamalakanta, Prajapati, Nishant, Rout, Rashmi Ranjan, Bera, Padmalochan.  2020.  Machine Learning based Malware Detection in Cloud Environment using Clustering Approach. 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1—7.

Enforcing security and resilience in a cloud platform is an essential but challenging problem due to the presence of a large number of heterogeneous applications running on shared resources. A security analysis system that can detect threats or malware must exist inside the cloud infrastructure. Much research has been done on machine learning-driven malware analysis, but existing approaches are limited in computational complexity and detection accuracy. To overcome these drawbacks, we propose a new malware detection system based on the concept of clustering and Trend Micro Locality Sensitive Hashing (TLSH). We use the Cuckoo sandbox, which provides dynamic analysis reports of files by executing them in an isolated environment, and a novel feature extraction algorithm to extract essential features from the malware reports obtained from the Cuckoo sandbox. The most important features are then selected using principal component analysis (PCA), random forest, and Chi-square feature selection methods. Experimental results are obtained for clustering and non-clustering approaches with three classifiers: Decision Tree, Random Forest, and Logistic Regression. The model shows better classification accuracy and false positive rate (FPR) than state-of-the-art works and the non-clustering approach, at significantly lower computational cost.
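The clustering step could look roughly like the sketch below, which groups Cuckoo reports by TLSH similarity before classification. It assumes the py-tlsh package and a hypothetical cuckoo_reports directory; the distance threshold is arbitrary and chosen only for illustration.

```python
# Illustrative sketch of grouping sandbox reports by TLSH similarity before
# classification. Assumes the py-tlsh package (import tlsh) is installed and
# that cuckoo_reports/ holds Cuckoo JSON reports (a hypothetical path).
import pathlib
import tlsh

def tlsh_digest(path: pathlib.Path) -> str:
    return tlsh.hash(path.read_bytes())          # needs at least ~50 bytes of input

def greedy_clusters(paths, max_dist=100):
    """Assign each report to the first cluster whose representative is within max_dist."""
    clusters = []                                # list of (representative_digest, [paths])
    for p in paths:
        d = tlsh_digest(p)
        for rep, members in clusters:
            if tlsh.diff(rep, d) <= max_dist:
                members.append(p)
                break
        else:
            clusters.append((d, [p]))
    return clusters

reports = sorted(pathlib.Path("cuckoo_reports").glob("*.json"))
for rep, members in greedy_clusters(reports):
    print(rep[:16], len(members))
```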

Rathod, Jash, Joshi, Chaitali, Khochare, Janavi, Kazi, Faruk.  2020.  Interpreting a Black-Box Model used for SCADA Attack detection in Gas Pipelines Control System. 2020 IEEE 17th India Council International Conference (INDICON). :1—7.
Various Machine Learning techniques are considered "black-boxes" because of their limited interpretability and explainability. This cannot be afforded, especially in the domain of Cyber-Physical Systems, where failures can cause huge losses to industrial and government infrastructure. Supervisory Control And Data Acquisition (SCADA) systems need to detect cyber-attacks and be protected from them. Thus, we need to adopt approaches that make the system secure, can explain the predictions made by the model, and interpret the model in a human-understandable format. Recently, autoencoders have shown great success in attack detection in SCADA systems, and numerous interpretable machine learning techniques have been developed to help explain and interpret models. The work presented here is a novel approach that uses techniques such as Local Interpretable Model-Agnostic Explanations (LIME) and Layer-wise Relevance Propagation (LRP) to interpret autoencoder networks trained on a gas pipeline control system to detect attacks in the system.
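A rough sketch of applying LIME to an autoencoder-based detector follows: the reconstruction error acts as the anomaly score that LIME explains in regression mode. The tiny Keras autoencoder and synthetic features are assumptions; the paper works on a gas pipeline SCADA dataset and also uses LRP.

```python
# Sketch: explain an autoencoder's anomaly score with LIME on tabular features.
import numpy as np
import tensorflow as tf
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(2)
X_train = rng.normal(size=(1000, 16)).astype("float32")   # placeholder SCADA features

ae = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(16),
])
ae.compile(optimizer="adam", loss="mse")
ae.fit(X_train, X_train, epochs=5, verbose=0)

def anomaly_score(batch):
    """Reconstruction error used as the attack score that LIME explains."""
    batch = np.asarray(batch, dtype="float32")
    recon = ae.predict(batch, verbose=0)
    return np.mean((recon - batch) ** 2, axis=1)

explainer = LimeTabularExplainer(X_train, mode="regression")
explanation = explainer.explain_instance(X_train[0], anomaly_score, num_features=5)
print(explanation.as_list())      # per-feature contributions to the anomaly score
```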
Lee, Jae-Myeong, Hong, Sugwon.  2020.  Host-Oriented Approach to Cyber Security for the SCADA Systems. 2020 6th IEEE Congress on Information Science and Technology (CiSt). :151—155.
Recent cyberattacks targeting Supervisory Control and Data Acquisition (SCADA)/Industrial Control Systems (ICS) exploit weaknesses in the host software environment and take control of host processes in the station network. We analyze the attack path of these attacks, focusing on how they hijack hosts in the network and compromise the operation of field device controllers. The paper proposes a host-based protection method that can prevent malware from penetrating process memory via code injection attacks. The method consists of two protection schemes: one prevents file-based code injection such as DLL injection, and the other prevents fileless code injection. The method traces changes in memory regions and determines whether newly allocated memory has been written with malicious code. We show how a machine learning method can be adopted for this purpose.
Bulle, Bruno B., Santin, Altair O., Viegas, Eduardo K., dos Santos, Roger R..  2020.  A Host-based Intrusion Detection Model Based on OS Diversity for SCADA. IECON 2020 The 46th Annual Conference of the IEEE Industrial Electronics Society. :691—696.

Supervisory Control and Data Acquisition (SCADA) systems have been a frequent target of cyberattacks in Industrial Control Systems (ICS). Because such systems attract highly motivated attackers, researchers often resort to machine learning-based intrusion detection to catch new kinds of threats. However, current research initiatives generally pursue higher detection accuracies while neglecting the detection of new kinds of threats and the detection scope of their proposals. This paper proposes a novel, reliable host-based intrusion detection model for SCADA systems based on Operating System (OS) diversity. Our proposal evaluates the SCADA communication over time at the OS level and opportunistically detects and chooses the most appropriate OS to be used for intrusion detection, for reliability purposes. Experiments performed across a variety of SCADA front-end OSs show that OS diversity provides a broader intrusion detection scope, extending detection by up to 8 new attack categories. Moreover, our proposal can opportunistically select the most reliable OS for the current environment behavior, improving the system accuracy by up to 8% on average, in the best case, when compared to a single-OS approach.
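One way to read the "choose the most reliable OS" idea is sketched below: each OS front-end keeps its own detector, and the pool routes predictions through whichever detector has been most accurate over a recent window. The detector interface and window length are assumptions, not the paper's mechanism.

```python
# Simplified sketch of OS-diversity-based detector selection. Detector objects
# are placeholders with a predict(event) -> 0/1 interface.
from collections import deque

class OSDetectorPool:
    def __init__(self, detectors, window=100):
        # detectors: dict mapping OS name -> detector object
        self.detectors = detectors
        self.history = {os_name: deque(maxlen=window) for os_name in detectors}

    def record(self, os_name, prediction, truth):
        """Log whether a given OS detector was correct on a labeled event."""
        self.history[os_name].append(prediction == truth)

    def best_os(self):
        """Return the OS whose detector was most often correct in the recent window."""
        def recent_acc(os_name):
            h = self.history[os_name]
            return sum(h) / len(h) if h else 0.0
        return max(self.detectors, key=recent_acc)

    def predict(self, event):
        return self.detectors[self.best_os()].predict(event)
```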

Pawar, Shrikant, Stanam, Aditya.  2020.  Scalable, Reliable and Robust Data Mining Infrastructures. 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4). :123—125.

Data mining is used to analyze data to discover previously unknown patterns and to classify and group records. Several important scalable data mining platforms have been developed in recent years. RapidMiner is a well-known open source software package that can be used for advanced analytics; Weka and Orange are important machine learning tools for classifying patterns with clustering and regression techniques; and KNIME is often used for data preprocessing such as extraction, transformation, and loading. This article summarizes the most important and robust platforms.

Hasan, Tooba, Adnan, Akhunzada, Giannetsos, Thanassis, Malik, Jahanzaib.  2020.  Orchestrating SDN Control Plane towards Enhanced IoT Security. 2020 6th IEEE Conference on Network Softwarization (NetSoft). :457—464.

The Internet of Things (IoT) is rapidly evolving, while introducing several new challenges regarding security, resilience and operational assurance. In the face of an increasing attack landscape, it is necessary to provide efficient mechanisms to collectively detect sophisticated malware resulting in undesirable (run-time) device and network modifications. This is not an easy task considering the dynamic and heterogeneous nature of IoT environments, i.e., different operating systems, varied connected networks and a wide gamut of underlying protocols and devices. Malicious IoT nodes or gateways can potentially lead to the compromise of the whole IoT network infrastructure. On the other hand, the SDN control plane has the capability to be orchestrated towards providing enhanced security services to all layers of the IoT networking stack. In this paper, we propose an SDN-enabled control plane based orchestration that leverages emerging Long Short-Term Memory (LSTM) classification models; a Deep Learning (DL) based architecture to combat malicious IoT nodes. It is a first step towards a new line of security mechanisms that enables the provision of scalable AI-based intrusion detection focusing on the operational assurance of only those specific, critical infrastructure components, thus allowing for a much more efficient security solution. The proposed mechanism has been evaluated with current state-of-the-art datasets (i.e., N_BaIoT 2018) using standard performance evaluation metrics. Our preliminary results show an outstanding detection accuracy (i.e., 99.9%) which significantly outperforms state-of-the-art approaches. Based on our findings, we posit open issues and challenges, and discuss possible ways to address them, so that security does not hinder the deployment of intelligent IoT-based computing systems.
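A minimal LSTM classifier of the kind such an architecture builds on might look like the sketch below; the sequence length, feature count, layer sizes, and synthetic data are assumptions, since the paper trains on N_BaIoT 2018 traffic.

```python
# Sketch of an LSTM-based binary classifier for windows of IoT traffic features.
import numpy as np
import tensorflow as tf

T, F = 20, 23                                    # sequence length and features per step (placeholders)
rng = np.random.default_rng(3)
X = rng.normal(size=(5000, T, F)).astype("float32")
y = rng.integers(0, 2, size=5000)                # 1 = malicious traffic window

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(T, F)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=128, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))
```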

2021-05-03
Paulsen, Brandon, Wang, Jingbo, Wang, Jiawei, Wang, Chao.  2020.  NEURODIFF: Scalable Differential Verification of Neural Networks using Fine-Grained Approximation. 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE). :784–796.
As neural networks make their way into safety-critical systems, where misbehavior can lead to catastrophes, there is a growing interest in certifying the equivalence of two structurally similar neural networks - a problem known as differential verification. For example, compression techniques are often used in practice for deploying trained neural networks on computationally- and energy-constrained devices, which raises the question of how faithfully the compressed network mimics the original network. Unfortunately, existing methods either focus on verifying a single network or rely on loose approximations to prove the equivalence of two networks. Due to overly conservative approximation, differential verification lacks scalability in terms of both accuracy and computational cost. To overcome these problems, we propose NEURODIFF, a symbolic and fine-grained approximation technique that drastically increases the accuracy of differential verification on feed-forward ReLU networks while achieving many orders-of-magnitude speedup. NEURODIFF has two key contributions. The first one is new convex approximations that more accurately bound the difference of two networks under all possible inputs. The second one is judicious use of symbolic variables to represent neurons whose difference bounds have accumulated significant error. We find that these two techniques are complementary, i.e., when combined, the benefit is greater than the sum of their individual benefits. We have evaluated NEURODIFF on a variety of differential verification tasks. Our results show that NEURODIFF is up to 1000X faster and 5X more accurate than the state-of-the-art tool.
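For orientation only, the sketch below bounds the output difference of two ReLU networks with plain interval arithmetic over an input box; this is a deliberately loose baseline that illustrates the differential-verification problem, not NEURODIFF's convex or symbolic approximations.

```python
# Naive interval-arithmetic baseline for differential verification of two
# feed-forward ReLU networks over an input box. Weights are random placeholders.
import numpy as np

def interval_forward(weights, biases, lo, hi):
    """Sound interval bounds of a feed-forward ReLU network over the input box [lo, hi]."""
    last = len(weights) - 1
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        if i < last:                               # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0), np.maximum(new_hi, 0)
        lo, hi = new_lo, new_hi
    return lo, hi

rng = np.random.default_rng(4)
dims = [4, 8, 2]
Ws = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
bs = [rng.normal(size=(dims[i + 1],)) for i in range(len(dims) - 1)]
Ws2 = [W + 0.01 * rng.normal(size=W.shape) for W in Ws]   # slightly perturbed "compressed" copy

lo0, hi0 = np.full(4, -1.0), np.full(4, 1.0)
l1, h1 = interval_forward(Ws, bs, lo0, hi0)
l2, h2 = interval_forward(Ws2, bs, lo0, hi0)
print("worst-case |f1 - f2| per output:", np.maximum(h1 - l2, h2 - l1))
```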
Sohail, Muhammad, Zheng, Quan, Rezaiefar, Zeinab, Khan, Muhammad Alamgeer, Ullah, Rizwan, Tan, Xiaobin, Yang, Jian, Yuan, Liu.  2020.  Triangle Area Based Multivariate Correlation Analysis for Detecting and Mitigating Cache Pollution Attacks in Named Data Networking. 2020 3rd International Conference on Hot Information-Centric Networking (HotICN). :114–121.
The key feature of NDN is in-network caching: every router has its own cache to store data for future use, which improves the usage of network bandwidth and reduces network latency. However, in-network caching increases security risks, namely cache pollution attacks (CPA), which include locality disruption (ruining the cache locality by sending random requests for unpopular contents to make them popular) and false locality (introducing unpopular contents into the router's cache by sending requests for a set of unpopular contents). In this paper, we propose a machine learning method, named Triangle Area Based Multivariate Correlation Analysis (TAB-MCA), that detects cache pollution attacks in NDN. The detection system has two parts: the triangle-area-based MCA technique, which extracts hidden geometrical correlations between every pair of distinct features over all possible permutations, and a threshold-based anomaly detection technique. This design allows our model to distinguish attacks from legitimate traffic records without requiring prior knowledge. Our technique detects locality disruption, false locality, and combinations of the two with high accuracy. Implemented on the XC topology, the proposed method shows high efficiency in mitigating these attacks. In comparison to other ML methods, our proposed method has a low overhead cost in mitigating CPA, as it does not require prior knowledge of attackers. Additionally, our method can also detect non-uniform attack distributions.
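The triangle-area idea can be sketched as follows: for each pair of features, the record is projected onto that 2-D subspace and the area of the triangle formed with the origin becomes a correlation feature; a simple mean-plus-k-sigma rule then flags deviating records. The thresholding rule and the synthetic statistics are assumptions, not the paper's exact procedure.

```python
# Sketch of a triangle-area map (TAM) and a simple threshold-based detector.
import numpy as np

def triangle_area_map(x):
    """Triangle areas (|x_i|*|x_j|/2) for all pairs of features of one record."""
    outer = np.abs(np.outer(x, x)) / 2.0
    return outer[np.triu_indices(len(x), k=1)]

def fit_profile(X_normal):
    """Mean and spread of the TAM over attack-free records."""
    tams = np.array([triangle_area_map(x) for x in X_normal])
    return tams.mean(axis=0), tams.std(axis=0) + 1e-9

def is_attack(x, mean_tam, std_tam, k=3.0):
    """Flag a record whose TAM deviates from the normal profile by more than k sigmas on average."""
    z = np.abs(triangle_area_map(x) - mean_tam) / std_tam
    return z.mean() > k

rng = np.random.default_rng(5)
normal = rng.normal(1.0, 0.1, size=(500, 6))       # placeholder per-interface request statistics
profile = fit_profile(normal)
print(is_attack(rng.normal(1.0, 0.1, size=6), *profile))   # expected: False
print(is_attack(rng.normal(5.0, 0.1, size=6), *profile))   # expected: True
```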
2021-04-29
Hayes, J. Huffman, Payne, J., Essex, E., Cole, K., Alverson, J., Dekhtyar, A., Fang, D., Bernosky, G..  2020.  Towards Improved Network Security Requirements and Policy: Domain-Specific Completeness Analysis via Topic Modeling. 2020 IEEE Seventh International Workshop on Artificial Intelligence for Requirements Engineering (AIRE). :83—86.

Network security policies contain requirements, including system and software features as well as expected and desired actions of human actors. In this paper, we present a framework for evaluating textual network security policies as requirements documents to identify areas for improvement. Specifically, our framework concentrates on completeness. We use topic modeling coupled with expert evaluation to learn the complete list of important topics that should be addressed in a network security policy. Using these topics as a checklist, we evaluate a collection of network security policies for completeness, i.e., the level of presence of these topics in the text. We developed three methods for topic recognition to identify missing or poorly addressed topics. We examine network security policies and report the results of our analysis, which show the preliminary success of our approach.
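A minimal sketch of the topic-coverage check, assuming a small reference corpus and an illustrative presence threshold, could use LDA as follows; the real framework couples the learned topics with expert evaluation and three topic-recognition methods.

```python
# Sketch: fit a topic model on reference policies, then measure how strongly
# each topic appears in a policy under review. Corpus and threshold are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reference_policies = ["passwords must be rotated every ninety days ...",
                      "the incident response plan defines escalation steps ...",
                      "remote access is permitted only through the corporate vpn ..."]
policy_under_review = "all users shall use strong passwords and report incidents promptly"

vec = CountVectorizer(stop_words="english")
X_ref = vec.fit_transform(reference_policies)

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X_ref)
coverage = lda.transform(vec.transform([policy_under_review]))[0]

for topic_id, weight in enumerate(coverage):
    status = "covered" if weight > 0.15 else "missing or weak"
    print(f"topic {topic_id}: weight={weight:.2f} -> {status}")
```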

2021-04-27
Himthani, P., Dubey, G. P., Sharma, B. M., Taneja, A..  2020.  Big Data Privacy and Challenges for Machine Learning. 2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC). :707—713.

The field of Big Data has been expanding at an alarming rate since its inception in 2012. The excessive use of social networking sites, the collection of data from sensors for analysis and prediction of future events, and the improvement of customer satisfaction on online shopping portals by monitoring past behavior and instantly providing information, items, and offers of interest have all contributed to this rise of Big Data. This huge amount of data, if analyzed and processed properly, can lead to decisions and outcomes of great value and benefit to organizations and individuals. Security of data and privacy of users are of keen interest and high importance for individuals, industry, and academia. Everyone wants to ensure that their sensitive information is kept away from unauthorized access and that their assets are safe from security breaches. Privacy and security are equally important for Big Data, and here it is particularly complex to ensure them, as the amount of data is enormous. One possible option to effectively and efficiently handle, process, and analyze Big Data is to make use of machine learning techniques. Machine learning techniques themselves are straightforward, but applying them to Big Data requires the resolution of various issues and is a challenging task, as the size of the data is too big. This paper provides a brief introduction to Big Data, the importance of security and privacy in Big Data, and the various challenges that must be overcome to apply machine learning techniques to Big Data.

Piplai, A., Ranade, P., Kotal, A., Mittal, S., Narayanan, S. N., Joshi, A..  2020.  Using Knowledge Graphs and Reinforcement Learning for Malware Analysis. 2020 IEEE International Conference on Big Data (Big Data). :2626—2633.

Machine learning algorithms used to detect attacks are limited by the fact that they cannot incorporate the background knowledge that an analyst has, which limits their suitability for detecting new attacks. Reinforcement learning is different from the traditional machine learning algorithms used in the cybersecurity domain: compared to traditional ML algorithms, reinforcement learning does not need a mapping of the input-output space or a specific user-defined metric to compare data points. This is important for the cybersecurity domain, especially for malware detection and mitigation, as not all problems have a single, known, correct answer, and security researchers often have to resort to guided trial and error to understand the presence of a malware and mitigate it. In this paper, we incorporate prior knowledge, represented as Cybersecurity Knowledge Graphs (CKGs), to guide the exploration of an RL algorithm to detect malware. CKGs capture semantic relationships between cyber-entities, including relationships mined from open sources. Instead of trying out random guesses and observing the change in the environment, we take the help of verified knowledge about cyber-attacks to guide our reinforcement learning algorithm to effectively identify ways to detect the presence of malicious filenames so that they can be deleted to mitigate a cyber-attack. We show that such a guided system outperforms a base RL system in detecting malware.
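A very rough sketch of knowledge-guided exploration follows: a tabular Q-learning agent biases its exploratory moves toward actions a knowledge-graph lookup associates with the observed indicators. The environment, action set, and KG query are hypothetical stand-ins, not the paper's CKG implementation.

```python
# Sketch of knowledge-guided epsilon-greedy Q-learning for malware mitigation.
import random
from collections import defaultdict

ACTIONS = ["scan_processes", "inspect_file", "delete_file", "quarantine_host"]

def kg_suggested_actions(observation):
    """Placeholder for a Cybersecurity Knowledge Graph query over observed indicators."""
    return ["inspect_file", "delete_file"] if "suspicious_filename" in observation else ACTIONS

def choose_action(q_table, state, observation, epsilon=0.2):
    if random.random() < epsilon:
        return random.choice(kg_suggested_actions(observation))   # KG-guided exploration
    return max(ACTIONS, key=lambda a: q_table[(state, a)])        # greedy exploitation

def q_update(q_table, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])

q_table = defaultdict(float)   # (state, action) -> value
action = choose_action(q_table, "host_infected", {"suspicious_filename"})
q_update(q_table, "host_infected", action, reward=1.0, next_state="host_clean")
```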

Sharma, S., Zavarsky, P., Butakov, S..  2020.  Machine Learning based Intrusion Detection System for Web-Based Attacks. 2020 IEEE 6th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). :227—230.

Various studies have explored the feasibility of detecting web-based attacks with machine learning techniques. False-positive and false-negative results have been reported as a major issue that must be addressed to make machine learning-based detection and prevention of web-based attacks reliable and trustworthy. In our research, we tried to identify and address the root cause of these false-positive and false-negative results. In our experiment, we used the CSIC 2010 HTTP dataset, which contains generated traffic targeting an e-commerce web application. Our experimental results demonstrate that applying the proposed fine-tuned feature set extraction improves the detection and classification of web-based attacks for all tested machine learning algorithms. The performance of each algorithm in detecting attacks was evaluated using the Precision, Recall, Accuracy, and F-measure metrics. Among the three tested algorithms, the J48 decision tree algorithm provided the highest True Positive rate, Precision, and Recall.
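The evaluation loop described here can be sketched with scikit-learn's entropy-based decision tree standing in for Weka's J48 (C4.5); the synthetic features below are placeholders for the fine-tuned feature set extracted from CSIC 2010 requests.

```python
# Sketch: train a decision tree on request features and report the four metrics
# named in the abstract. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, accuracy_score, f1_score

rng = np.random.default_rng(6)
X = rng.normal(size=(4000, 10))                 # placeholder request features (length, entropy, ...)
y = rng.integers(0, 2, size=4000)               # 1 = web attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("precision", precision_score(y_te, pred))
print("recall   ", recall_score(y_te, pred))
print("accuracy ", accuracy_score(y_te, pred))
print("f-measure", f1_score(y_te, pred))
```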

Tahsini, A., Dunstatter, N., Guirguis, M., Ahmed, C. M..  2020.  DeepBLOC: A Framework for Securing CPS through Deep Reinforcement Learning on Stochastic Games. 2020 IEEE Conference on Communications and Network Security (CNS). :1–9.
One important aspect of protecting a Cyber Physical System (CPS) is ensuring that the proper control and measurement signals are propagated within the control loop. The CPS research community has developed a large set of check blocks that can be integrated within the control loop to check signals against various types of attacks (e.g., false data injection attacks). Unfortunately, it is not possible to integrate all these "checks" within the control loop, as the overhead introduced when checking signals may violate the delay constraints of the control loop. Moreover, these blocks do not operate entirely in isolation of each other, as dependencies exist among them in terms of their effectiveness against detecting a subset of attacks. Thus, assigning the proper checks becomes a challenging and complex problem, especially in the presence of a rational adversary who can observe the assigned check blocks and optimize her own attack strategies accordingly. This paper tackles the inherent state-action space explosion that arises in securing CPS by developing DeepBLOC (DB), a framework in which Deep Reinforcement Learning algorithms are utilized to provide optimal/sub-optimal assignments of check blocks to signals. The framework models stochastic games between the adversary and the CPS defender and derives mixed strategies for assigning check blocks to ensure the integrity of the propagated signals while abiding by the real-time constraints dictated by the control loop. Through extensive simulation experiments and a real implementation on a water purification system, we show that DB achieves assignment strategies that outperform other strategies and heuristics.
Marchisio, A., Nanfa, G., Khalid, F., Hanif, M. A., Martina, M., Shafique, M..  2020.  Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
Spiking Neural Networks (SNNs) claim to present many advantages in terms of biological plausibility and energy efficiency compared to standard Deep Neural Networks (DNNs). Recent works have shown that DNNs are vulnerable to adversarial attacks, i.e., small perturbations added to the input data can lead to targeted or random misclassifications. In this paper, we aim at investigating the key research question: "Are SNNs secure?" Towards this, we perform a comparative study of the security vulnerabilities in SNNs and DNNs w.r.t. the adversarial noise. Afterwards, we propose a novel black-box attack methodology, i.e., without the knowledge of the internal structure of the SNN, which employs a greedy heuristic to automatically generate imperceptible and robust adversarial examples (i.e., attack images) for the given SNN. We perform an in-depth evaluation for a Spiking Deep Belief Network (SDBN) and a DNN having the same number of layers and neurons (to obtain a fair comparison), in order to study the efficiency of our methodology and to understand the differences between SNNs and DNNs w.r.t. the adversarial examples. Our work opens new avenues of research towards the robustness of the SNNs, considering their similarities to the human brain's functionality.
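A generic greedy black-box loop of the kind this abstract describes is sketched below: the attacker only queries output probabilities and keeps, at each step, the single-pixel change that most reduces confidence in the true class. The model interface, step size, and stopping rule are assumptions, not the paper's exact heuristic.

```python
# Sketch of a greedy, query-only (black-box) adversarial perturbation loop.
# The exhaustive per-pixel search is only practical for small inputs.
import numpy as np

def greedy_blackbox_attack(predict_proba, image, true_label, eps=0.05, max_steps=50):
    """predict_proba(img) -> class probability vector; image values assumed in [0, 1]."""
    adv = image.copy()
    for _ in range(max_steps):
        base = predict_proba(adv)[true_label]
        best_drop, best_change = 0.0, None
        # Try perturbing each pixel up and down; keep the most damaging change.
        for idx in np.ndindex(adv.shape):
            for delta in (eps, -eps):
                trial = adv.copy()
                trial[idx] = np.clip(trial[idx] + delta, 0.0, 1.0)
                drop = base - predict_proba(trial)[true_label]
                if drop > best_drop:
                    best_drop, best_change = drop, (idx, trial[idx])
        if best_change is None:
            break                                   # no single-pixel change helps any more
        adv[best_change[0]] = best_change[1]
        if np.argmax(predict_proba(adv)) != true_label:
            break                                   # misclassification achieved
    return adv
```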
2021-04-09
Chytas, S. P., Maglaras, L., Derhab, A., Stamoulis, G..  2020.  Assessment of Machine Learning Techniques for Building an Efficient IDS. 2020 First International Conference of Smart Systems and Emerging Technologies (SMARTTECH). :165—170.
Intrusion Detection Systems (IDS) are the systems that detect and block any potential threats (e.g. DDoS attacks) in the network. In this project, we explore the performance of several machine learning techniques when used as parts of an IDS. We experiment with the CICIDS2017 dataset, one of the biggest and most complete IDS datasets in terms of having a realistic background traffic and incorporating a variety of cyber attacks. The techniques we present are applicable to any IDS dataset and can be used as a basis for deploying a real time IDS in complex environments.
Mishra, A., Yadav, P..  2020.  Anomaly-based IDS to Detect Attack Using Various Artificial Intelligence Machine Learning Algorithms: A Review. 2nd International Conference on Data, Engineering and Applications (IDEA). :1—7.
Cyber-attacks are becoming more complex, making accurate intrusion detection (ID) an increasingly difficult task. Failure to prevent intrusions can degrade the reliability of security services, for example the integrity, privacy, and availability of data. The rapid proliferation of computer networks (CNs) has reshaped the perception of network security, and easily accessible environments expose computer networks to many threats from hackers. Threats to a network are numerous and potentially devastating. Researchers have developed Intrusion Detection Systems (IDS) to identify attacks in a wide variety of environments. Several approaches to intrusion detection, usually classified as Signature-based Intrusion Detection Systems (SIDS) and Anomaly-based Intrusion Detection Systems (AIDS), have been proposed in the literature to address computer safety hazards. This survey paper presents a review of current IDS, a thorough analysis of prominent recent works, and the datasets commonly used for evaluation purposes. It also introduces the evasion techniques used by attackers to avoid detection, and delivers a description of AIDS for attack detection. IDS is an applied research area in artificial intelligence (AI) that uses multiple machine learning algorithms.
Lin, T., Shi, Y., Shu, N., Cheng, D., Hong, X., Song, J., Gwee, B. H..  2020.  Deep Learning-Based Image Analysis Framework for Hardware Assurance of Digital Integrated Circuits. 2020 IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA). :1—6.
We propose an Artificial Intelligence (AI)/Deep Learning (DL)-based image analysis framework for hardware assurance of digital integrated circuits (ICs). Our aim is to examine and verify various hardware information from analyzing the Scanning Electron Microscope (SEM) images of an IC. In our proposed framework, we apply DL-based methods at all essential steps of the analysis. To the best of our knowledge, this is the first such framework that makes heavy use of DL-based methods at all essential analysis steps. Further, to reduce time and effort required in model re-training, we propose and demonstrate various automated or semi-automated training data preparation methods and demonstrate the effectiveness of using synthetic data to train a model. By applying our proposed framework to analyzing a set of SEM images of a large digital IC, we prove its efficacy. Our DL-based methods are fast, accurate, robust against noise, and can automate tasks that were previously performed mainly manually. Overall, we show that DL-based methods can largely increase the level of automation in hardware assurance of digital ICs and improve its accuracy.
2021-04-08
Rhee, K. H..  2020.  Composition of Visual Feature Vector Pattern for Deep Learning in Image Forensics. IEEE Access. 8:188970—188980.

In image forensics, to determine whether an image has been improperly transformed, the features contained in the suspicious image are extracted and examined. In general, the features extracted for the detection of forged images are numerical values, so it is somewhat unreasonable to use them directly in a CNN structure for image classification. In this paper, the feature vector is extracted using a least-squares solution: a suspicious image is treated as a matrix and the solution coefficients are used as the feature vector. Two solutions are obtained, one from the original image and one from its median filter residual (MFR). The two feature vectors are then formed into a visualized pattern and fed into a CNN for deep learning to classify the various transformed images. A new CNN layer structure, a hybrid of the inception module and the residual block, was also designed to classify the visualized feature vector patterns. The performance of the proposed image forensics detection (IFD) scheme was measured on seven types of transformed images: average filtered (window size: 3 × 3), Gaussian filtered (window size: 3 × 3), JPEG compressed (quality factor: 90, 70), median filtered (window size: 3 × 3, 5 × 5), and unaltered. The visualized patterns are fed into the image input layer of the designed CNN hybrid model. Throughout the experiments, the accuracy of median filtering detection was over 98%. The area under the curve (AUC) obtained from the sensitivity (TP: true positive rate) and 1-specificity (FP: false positive rate) of the proposed IFD scheme approached 1 on the designed CNN hybrid model. The experimental results show high efficiency and performance in classifying the various transformed images; therefore, the proposed scheme is rated “Excellent (A)”.
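One reading of the least-squares feature extraction is sketched below, assuming a constant right-hand-side vector: the image (and separately its median filter residual) is used as the coefficient matrix of a least-squares system, and the solution coefficients form the feature vector that the paper then renders as a visual pattern for the CNN.

```python
# Hedged sketch of least-squares feature extraction from an image and its MFR.
# The choice of a constant right-hand-side vector is an illustrative assumption.
import numpy as np
from scipy.ndimage import median_filter

def lstsq_features(img):
    """Least-squares solution of img @ w ≈ 1, used as a compact feature vector."""
    A = img.astype(float)
    b = np.ones(A.shape[0])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

rng = np.random.default_rng(7)
image = rng.integers(0, 256, size=(64, 64)).astype(float)   # placeholder grayscale image
mfr = image - median_filter(image, size=3)                   # median filter residual

feature_vector = np.concatenate([lstsq_features(image), lstsq_features(mfr)])
print(feature_vector.shape)                                   # (128,) for this placeholder image
```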

Sarma, M. S., Srinivas, Y., Abhiram, M., Ullala, L., Prasanthi, M. S., Rao, J. R..  2017.  Insider Threat Detection with Face Recognition and KNN User Classification. 2017 IEEE International Conference on Cloud Computing in Emerging Markets (CCEM). :39—44.
Information security in cloud storage is a key concern with regard to degree of trust and cloud penetration. The cloud user community needs to ascertain performance and security via QoS. Numerous models have been proposed [2][3][6][7] to deal with security concerns, and the detection and prevention of insider threats also need to be tackled. Since the attacker is aware of sensitive information, threats due to cloud insiders are a grave concern. In this paper, we propose an authentication mechanism that performs authentication based on verifying the facial features of the cloud user, in addition to username and password, thereby acting as two-factor authentication. A new QoS is proposed that is capable of monitoring and detecting insider threats using machine learning techniques. A KNN classification algorithm is used to classify users into legitimate, possibly legitimate, possibly not legitimate, and not legitimate groups to verify image authenticity and conclude whether there is a possible insider threat. A threat detection model is also proposed for insider threats, which utilizes facial recognition and monitoring models. The security method put forth in [6][7] is honed to include the threat detection QoS to earn a higher degree of trust from the cloud user community. As a recommendation, the threat detection module should be harnessed in private cloud deployments such as defense and pharma applications. Experimentation was conducted using open source machine learning libraries, and the results are included in this paper.
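The KNN step could be sketched as below, assuming face embeddings are already available; the embedding size, training data, and neighbor count are placeholders for the paper's actual face recognition and monitoring pipeline.

```python
# Sketch: classify a login attempt's face embedding into the four legitimacy groups.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

GROUPS = ["legitimate", "possibly legitimate", "possibly not legitimate", "not legitimate"]

rng = np.random.default_rng(8)
embeddings = rng.normal(size=(400, 128))        # placeholder face-embedding vectors
labels = rng.integers(0, 4, size=400)           # indices into GROUPS

knn = KNeighborsClassifier(n_neighbors=5).fit(embeddings, labels)

login_attempt = rng.normal(size=(1, 128))       # embedding of the face captured at login
group = GROUPS[int(knn.predict(login_attempt)[0])]
print("threat assessment:", group)
```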