Biblio

Found 703 results

Filters: Keyword is machine learning
2022-01-11
McCarthy, Andrew, Andriotis, Panagiotis, Ghadafi, Essam, Legg, Phil.  2021.  Feature Vulnerability and Robustness Assessment against Adversarial Machine Learning Attacks. 2021 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA). :1–8.
Whilst machine learning has been widely adopted across various domains, it is important to consider how such techniques may be exploited by malicious users through adversarial attacks. Given a trained classifier, a malicious attacker may attempt to craft a data observation whose features purposefully trigger the classifier to yield incorrect responses. This has been observed in various image classification tasks, including falsifying road sign detection and facial recognition, which could have severe consequences in real-world deployment. In this work, we investigate how these attacks could impact network traffic analysis, and how a system could be made to misclassify common network attacks such as DDoS attacks. Using the CICIDS2017 data, we examine how vulnerable the data features used for intrusion detection are to perturbation attacks using FGSM adversarial examples. As a result, our method provides a defensive approach for assessing feature robustness that seeks to balance classification accuracy against minimising the attack surface of the feature space.
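As an illustration of the FGSM technique named in this abstract, here is a minimal sketch in PyTorch. It is not the authors' implementation: the model, loss function, and epsilon are placeholders, and the feature vectors stand in for CICIDS2017-style records.

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.05):
    """Craft FGSM adversarial examples for a batch of feature vectors.

    x: (batch, n_features) float tensor; y: (batch,) integer labels.
    epsilon: perturbation magnitude (illustrative value).
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step each feature in the direction that increases the loss the most.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```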
2022-01-10
Al-Ameer, Ali, AL-Sunni, Fouad.  2021.  A Methodology for Securities and Cryptocurrency Trading Using Exploratory Data Analysis and Artificial Intelligence. 2021 1st International Conference on Artificial Intelligence and Data Analytics (CAIDA). :54–61.
This paper discusses securities and cryptocurrency trading using artificial intelligence (AI), focusing on performing Exploratory Data Analysis (EDA) on selected technical indicators before proceeding to modelling, and then developing more practical models by introducing a new reward loss function that maximizes returns during the training phase. The results of the EDA reveal that the complex patterns within the data are better captured by discriminative classification models; this was endorsed by back-testing on two securities using an Artificial Neural Network (ANN) and Random Forests (RF) as discriminative models against their generative counterpart, Naïve Bayes. To enhance the learning process, the new reward loss function is used to retrain the ANN, with testing on AAPL, IBM, BRENT CRUDE and BTC using an auto-trading strategy that serves as the intelligent unit, and the results indicate that this loss clearly outperforms the conventional cross-entropy used in predictive models. The overall results of this work suggest that there should be a greater focus on EDA and on more practical losses in research on machine learning modelling for stock market prediction applications.
Kalinin, Maxim O., Krundyshev, Vasiliy M..  2021.  Computational Intelligence Technologies Stack for Protecting the Critical Digital Infrastructures against Security Intrusions. 2021 Fifth World Conference on Smart Trends in Systems Security and Sustainability (WorldS4). :118–122.
Over the past decade, information and telecommunication technology has made significant strides forward. With the advent of new-generation wireless networks and the massive digitalization of industries, the object of protection has changed. The digital transformation has created increased opportunities for cybercriminals. The ability of computational intelligence to quickly process large amounts of data allows intrusions to be tailored to specific environments. Polymorphic attacks, whose sequences of actions mutate, adapt to communication environments, operating systems and service frameworks, and also try to deceive defense tools. The poor protection of most Internet of Things devices allows attackers to take control of them, creating mega-botnets. In this regard, traditional methods of network protection become rigid and ineffective. The paper reviews a computational intelligence (CI) enabled software-defined network (SDN) for network management, providing dynamic network reconfiguration to improve network performance and security control. Advanced machine learning and artificial neural networks are promising for detecting false data injections. Bioinformatics methods make it possible to detect polymorphic attacks. Swarm intelligence detects dynamic routing anomalies. Quantum machine learning is effective at processing large volumes of security-relevant data. The CI technology stack provides comprehensive protection against a wide scope of cyberthreats.
Takey, Yuvraj Sanjayrao, Tatikayala, Sai Gopal, Samavedam, Satyanadha Sarma, Lakshmi Eswari, P R, Patil, Mahesh Uttam.  2021.  Real Time early Multi Stage Attack Detection. 2021 7th International Conference on Advanced Computing and Communication Systems (ICACCS). 1:283–290.
In recent times, attackers are continuously developing advanced techniques for evading security and stealing personal financial data, Intellectual Property (IP) and sensitive information. These attacks often employ multiple attack vectors for gaining initial access to systems. Analysts are often challenged to identify malware objectives, initial attack vectors, attack propagation, evasion techniques, protective mechanisms and unseen techniques. Most of these attacks are frequently referred to as multistage attacks and pose a grave threat to organizations, individuals and governments. Early multistage attack detection is a crucial measure for countering malware and deactivating it. Most traditional security solutions use signature-based detection, which frequently fails to thwart zero-day attacks. Manual analysis of these samples requires enormous effort to effectively counter the exponential growth of malware samples. In this paper, we present a novel approach leveraging Machine Learning and the MITRE Adversarial Tactics, Techniques and Common Knowledge (ATT&CK) framework for early multistage attack detection in real time. Firstly, we developed a run-time engine that receives a notification when a malicious executable is downloaded via the browser or a new process is launched in the system. Upon notification, the engine extracts static features from the executable to learn whether it is malicious. Secondly, we use the MITRE ATT&CK framework, which evolved from real-world observations of cyber attacks and describes multistage attacks in terms of adversary Tactics, Techniques and Procedures (TTPs), to detect the malicious executable and predict the stages that the malware executes during the attack. Lastly, we propose a real-time system that combines both techniques for early multistage attack detection. The proposed model has been tested on 6000 unpacked malware samples and achieves 98% accuracy. The other major contribution of this paper is identifying the Windows API calls corresponding to each adversary technique in MITRE ATT&CK.
Acharya, Abiral, Oluoch, Jared.  2021.  A Dual Approach for Preventing Blackhole Attacks in Vehicular Ad Hoc Networks Using Statistical Techniques and Supervised Machine Learning. 2021 IEEE International Conference on Electro Information Technology (EIT). :230–235.
Vehicular Ad Hoc Networks (VANETs) have the potential to improve road safety and reduce traffic congestion by enhancing the sharing of messages about road conditions. Communication in VANETs depends upon a Public Key Infrastructure (PKI) that checks for message confidentiality, integrity, and authentication. One challenge that the PKI infrastructure does not eliminate is the possibility of malicious vehicles mounting a Distributed Denial of Service (DDoS) attack. We present a scheme that combines statistical modeling and machine learning techniques to detect and prevent blackhole attacks in a VANET environment. Simulation results demonstrate that, on average, our model produces a Receiver Operating Characteristic (ROC) Area Under the Curve (AUC) score of 96.78%, which is much higher than a no-skill ROC AUC score and only 3.22% away from an ideal ROC AUC score. Considering all the performance metrics, we show that the Support Vector Machine (SVM) and Gradient Boosting classifiers are more accurate and perform consistently better under various circumstances. Both have an accuracy of over 98%, F1-scores of over 95%, and ROC AUC scores of over 97%. Our scheme is robust and accurate as evidenced by its ability to identify and prevent blackhole attacks. Moreover, the scheme is scalable in that the addition of vehicles to the network does not compromise its accuracy and robustness.
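The SVM / Gradient Boosting comparison with ROC AUC and F1 metrics described here can be reproduced in outline with scikit-learn. This is a hypothetical sketch on a synthetic imbalanced dataset, not the authors' VANET data or exact models.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for labelled VANET traffic (1 = blackhole behaviour).
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

for clf in (SVC(probability=True), GradientBoostingClassifier()):
    clf.fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]   # probability of the attack class
    preds = clf.predict(X_te)
    print(type(clf).__name__,
          "ROC AUC:", roc_auc_score(y_te, scores),
          "F1:", f1_score(y_te, preds))
```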
Freas, Christopher B., Shah, Dhara, Harrison, Robert W..  2021.  Accuracy and Generalization of Deep Learning Applied to Large Scale Attacks. 2021 IEEE International Conference on Communications Workshops (ICC Workshops). :1–6.
Distributed denial of service attacks threaten the security and health of the Internet. Remediation relies on up-to-date and accurate attack signatures. Signature-based detection is relatively inexpensive computationally. Yet, signatures are inflexible when small variations exist in the attack vector. Attackers exploit this rigidity by altering their attacks to bypass the signatures. Our previous work revealed a critical problem with conventional machine learning models: they are unable to generalize over the temporal nature of network flow data to classify attacks. We thus explored the use of deep learning techniques on real flow data. We found that a variety of attacks could be identified with high accuracy compared to previous approaches. We show that a convolutional neural network can be implemented for this problem that is suitable for large volumes of data while maintaining useful levels of accuracy.
Ugwu, Chukwuemeka Christian, Obe, Olumide Olayinka, Popoọla, Olugbemiga Solomon, Adetunmbi, Adebayo Olusọla.  2021.  A Distributed Denial of Service Attack Detection System using Long Short Term Memory with Singular Value Decomposition. 2020 IEEE 2nd International Conference on Cyberspace (CYBER NIGERIA). :112–118.
The increase in online activity during the COVID-19 pandemic has generated a surge in network traffic capable of expanding the scope of DDoS attacks. Cyber criminals can now afford to launch massive DDoS attacks capable of degrading the performance of conventional machine learning based IDS models. Hence, there is an urgent need for an effective DDoS attack detection model with the capacity to handle large volumes of DDoS attack traffic. This study proposes a deep learning based DDoS attack detection system using Long Short Term Memory (LSTM). The proposed model was evaluated on the UNSW-NB15 and NSL-KDD intrusion datasets, whereby twenty-three (23) and twenty (20) attack features were extracted from UNSW-NB15 and NSL-KDD, respectively, using Singular Value Decomposition (SVD). The results from the proposed model show significant improvement when compared with results from some conventional machine learning techniques such as Naïve Bayes (NB), Decision Tree (DT), and Support Vector Machine (SVM), with accuracies of 94.28% and 90.59% on the two datasets, respectively. Furthermore, comparative analysis of LSTM with other deep learning results reported in the literature justified the choice of LSTM among its deep learning peers for detecting DDoS attacks over a network.
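A minimal sketch of the SVD-plus-LSTM pipeline this abstract describes, assuming tabular flow records reduced to 20 components and fed to the LSTM as single-step sequences. The layer sizes, component count, and random data are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Placeholder flow records and binary attack labels.
X = np.random.rand(1000, 42)
y = np.random.randint(0, 2, 1000)

# Reduce the raw feature space with Singular Value Decomposition.
X_svd = TruncatedSVD(n_components=20).fit_transform(X)
# LSTM expects (samples, timesteps, features); treat each record as one timestep.
X_seq = X_svd.reshape((X_svd.shape[0], 1, X_svd.shape[1]))

model = Sequential([
    LSTM(64, input_shape=(1, 20)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_seq, y, epochs=5, batch_size=64)
```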
Allagi, Shridhar, Rachh, Rashmi, Anami, Basavaraj.  2021.  A Robust Support Vector Machine Based Auto-Encoder for DoS Attacks Identification in Computer Networks. 2021 International Conference on Intelligent Technologies (CONIT). :1–6.
An unprecedented upsurge in the number of cyberattacks and threats is the corollary of ubiquitous internet connectivity. Among a variety of threats and attacks, Denial of Service (DoS) attacks are critical, and the conventional mechanisms currently used for detecting and identifying these attacks are not adequate. Real-time and robust mechanisms are the way to handle this, and machine learning-based techniques have been extensively used for this purpose in the recent past. In this paper, a robust mechanism using a Support Vector Machine based Auto-Encoder is proposed for identifying DoS attacks. The proposed technique is tested on the CICIDS dataset and achieves 99.32% accuracy for DoS attacks. To study the effect of the number of features on the performance of the technique, a discriminant component analysis is deployed for feature reduction, and independent experiments, namely SVM with 25 features, SVM with 30 features, SVM with 35 features, and PCA-SVM with 25 features, are conducted. From the experiments, it is observed that AE-SVM performed better than the others.
Sudar, K.Muthamil, Beulah, M., Deepalakshmi, P., Nagaraj, P., Chinnasamy, P..  2021.  Detection of Distributed Denial of Service Attacks in SDN using Machine learning techniques. 2021 International Conference on Computer Communication and Informatics (ICCCI). :1–5.
Software-defined networking (SDN) is a network architecture in which the network is built and configured virtually in software, so the settings of network connections can be changed dynamically; this is not possible in a traditional network, where connections are fixed. SDN is a good approach but is still vulnerable to DDoS attacks, which remain a menace to the internet. A DDoS attack uses multiple collaborating systems to target a particular server at the same time, and machine learning algorithms can be used to prevent such attacks. In SDN, the control layer sits in the centre, linking the application and infrastructure layers, and the devices in the infrastructure layer are controlled by software. In this paper, we propose machine learning techniques, namely Decision Tree and Support Vector Machine (SVM), to detect malicious traffic. Our test results show that the Decision Tree and Support Vector Machine (SVM) algorithms provide better accuracy and detection rates.
Gong, Jianhu.  2021.  Network Information Security Pipeline Based on Grey Relational Cluster and Neural Networks. 2021 5th International Conference on Computing Methodologies and Communication (ICCMC). :971–975.
A network information security pipeline based on grey relational clustering and neural networks is designed and implemented in this paper. The method is based on the principle that the optimal selected feature set must contain the feature with the highest information gain with respect to the data set category. First, the feature with the largest information gain is selected from all features as the search starting point, and then the class labels of the sample data set are fully considered. For better performance, neural networks are employed. A network's learning ability is directly determined by its complexity, and learning general complex problems over large sample data brings about a dramatic increase in network scale. The proposed model is validated through simulation.
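The information-gain selection step described above amounts to ranking features by their mutual information with the class label and starting the search from the highest-scoring one. A hypothetical scikit-learn sketch follows; the data and the use of mutual information as the gain estimate are assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

# Placeholder labelled data standing in for the network-security samples.
X, y = make_classification(n_samples=500, n_features=15, random_state=0)

# Estimate the information gain of each feature with respect to the class label.
gain = mutual_info_classif(X, y, random_state=0)
start_feature = int(np.argmax(gain))   # search starting point: highest-gain feature
ranking = np.argsort(gain)[::-1]       # features ordered by decreasing gain
print("start feature:", start_feature, "top-5 ranking:", ranking[:5])
```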
Moonamaldeniya, Menaka, Priyashantha, V.R.S.C., Gunathilake, M.B.N.B., Ransinghe, Y.M.P.B., Ratnayake, A.L.S.D., Abeygunawardhana, Pradeep K.W..  2021.  Prevent Data Exfiltration on Smart Phones Using Audio Distortion and Machine Learning. 2021 Moratuwa Engineering Research Conference (MERCon). :345–350.
Attacks on mobile devices have gained a significant amount of attention lately. This is because more and more individuals are switching from traditional non-smartphones to smartphones, so attackers and cybercriminals now have an opportunity to obtain information stored on smartphones. In this paper, we present an Android mobile application that helps minimize data exfiltration from attacks such as Acoustic Side-Channel Attacks, Clipboard Jacking, Permission Misuse and Malicious Apps. The paper begins with an introduction explaining the current issues in general and how attacks such as side-channel attacks and clipboard jacking paved the way for data exfiltration. We also discuss a few existing solutions that try to mitigate these problems. In the methodology, we explain how we arrived at the solution and what methods we followed to achieve the end goal of securing the smartphone. In the final section, we discuss the outcomes of the project and conclude with what needs to be done in the future to enhance this project so that the mobile application will continue to keep the user's data safe from criminals.
2021-12-22
Guerdan, Luke, Raymond, Alex, Gunes, Hatice.  2021.  Toward Affective XAI: Facial Affect Analysis for Understanding Explainable Human-AI Interactions. 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). :3789–3798.
As machine learning approaches are increasingly used to augment human decision-making, eXplainable Artificial Intelligence (XAI) research has explored methods for communicating system behavior to humans. However, these approaches often fail to account for the affective responses of humans as they interact with explanations. Facial affect analysis, which examines human facial expressions of emotions, is one promising lens for understanding how users engage with explanations. Therefore, in this work, we aim to (1) identify which facial affect features are pronounced when people interact with XAI interfaces, and (2) develop a multitask feature embedding for linking facial affect signals with participants' use of explanations. Our analyses and results show that the occurrence and values of facial AU1 and AU4, and Arousal are heightened when participants fail to use explanations effectively. This suggests that facial affect analysis should be incorporated into XAI to personalize explanations to individuals' interaction styles and to adapt explanations based on the difficulty of the task performed.
2021-12-21
Xu, Xiaojun, Wang, Qi, Li, Huichen, Borisov, Nikita, Gunter, Carl A., Li, Bo.  2021.  Detecting AI Trojans Using Meta Neural Analysis. 2021 IEEE Symposium on Security and Privacy (SP). :103–120.
In machine learning Trojan attacks, an adversary trains a corrupted model that obtains good performance on normal data but behaves maliciously on data samples with certain trigger patterns. Several approaches have been proposed to detect such attacks, but they make undesirable assumptions about the attack strategies or require direct access to the trained models, which restricts their utility in practice. This paper addresses these challenges by introducing a Meta Neural Trojan Detection (MNTD) pipeline that does not make assumptions on the attack strategies and only needs black-box access to models. The strategy is to train a meta-classifier that predicts whether a given target model is Trojaned. To train the meta-model without knowledge of the attack strategy, we introduce a technique called jumbo learning that samples a set of Trojaned models following a general distribution. We then dynamically optimize a query set together with the meta-classifier to distinguish between Trojaned and benign models. We evaluate MNTD with experiments on vision, speech, tabular data and natural language text datasets, and against different Trojan attacks such as data poisoning attacks, model manipulation attacks, and latent attacks. We show that MNTD achieves a 97% detection AUC score and significantly outperforms existing detection approaches. In addition, MNTD generalizes well and achieves high detection performance against unforeseen attacks. We also propose a robust MNTD pipeline which achieves around 90% detection AUC even when the attacker aims to evade detection with full knowledge of the system.
He, Zhangying, Miari, Tahereh, Makrani, Hosein Mohammadi, Aliasgari, Mehrdad, Homayoun, Houman, Sayadi, Hossein.  2021.  When Machine Learning Meets Hardware Cybersecurity: Delving into Accurate Zero-Day Malware Detection. 2021 22nd International Symposium on Quality Electronic Design (ISQED). :85–90.
Cybersecurity has for the past decades been at the front line of global attention as threats to information technology infrastructures have become critical. According to recent security reports, malicious software (a.k.a. malware) is rising at an alarming rate both in numbers and in the harm it causes to the security of computing systems. To address the high complexity and computational overheads of conventional software-based detection techniques, Hardware-Supported Malware Detection (HMD) has proved to be efficient for detecting malware at the processors' microarchitecture level with the aid of Machine Learning (ML) techniques applied to Hardware Performance Counter (HPC) data. Existing ML-based HMDs, while accurate in recognizing known signatures of malicious patterns, have not explored detecting unknown (zero-day) malware at run-time, which is a more challenging problem, since its HPC data does not match the signatures of any known attack applications in the existing database. In this work, we first present a review of recent ML-based HMDs utilizing built-in HPC register information. Next, we examine the suitability of various standard ML classifiers for zero-day malware detection and demonstrate that such methods are not capable of detecting unknown malware signatures with a high detection rate. Lastly, to address the challenge of run-time zero-day malware detection, we propose an ensemble learning-based technique to enhance the performance of standard malware detectors despite using a small number of microarchitectural features that are captured at run-time by existing HPCs. The experimental results demonstrate that our proposed approach, which applies AdaBoost ensemble learning with a Random Forest classifier as the regular classifier, achieves 92% F-measure and 95% TPR with only a 2% false positive rate in detecting zero-day malware using only the top 4 microarchitectural features.
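The ensemble configuration reported above, AdaBoost applied over a Random Forest base learner on a handful of HPC-derived features, can be sketched as follows. The synthetic four-feature data and estimator sizes are illustrative assumptions, not the authors' dataset or hyperparameters.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for HPC feature vectors (e.g. top-4 microarchitectural events).
X, y = make_classification(n_samples=2000, n_features=4, n_informative=4,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# AdaBoost ensemble with a Random Forest as the base learner
# (use base_estimator= instead of estimator= on scikit-learn versions before 1.2).
clf = AdaBoostClassifier(estimator=RandomForestClassifier(n_estimators=50),
                         n_estimators=10, random_state=0)
clf.fit(X_tr, y_tr)
print("F-measure:", f1_score(y_te, clf.predict(X_te)))
```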
2021-12-20
Mygdalis, Vasileios, Tefas, Anastasios, Pitas, Ioannis.  2021.  Introducing K-Anonymity Principles to Adversarial Attacks for Privacy Protection in Image Classification Problems. 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP). :1–6.
The network output activation values for a given input can be employed to produce a sorted ranking. Adversarial attacks typically generate the least amount of perturbation required to change the classifier label. In that sense, the generated adversarial perturbation only affects the output at the first position of the sorted ranking. We argue that meaningful information about the adversarial examples, i.e., their original labels, is still encoded in the network output ranking and could potentially be extracted using rule-based reasoning. To this end, we introduce a novel adversarial attack methodology inspired by K-anonymity principles, which generates adversarial examples that are not only misclassified but whose output sorted ranking spreads uniformly over K different positions. Any additional perturbation arising from the strength of the proposed objectives is regularized by a visual-similarity-based term. Experimental results show that the proposed approach achieves the optimization goals inspired by K-anonymity with reduced perturbation as well.
Ebrahimabadi, Mohammad, Younis, Mohamed, Lalouani, Wassila, Karimi, Naghmeh.  2021.  A Novel Modeling-Attack Resilient Arbiter-PUF Design. 2021 34th International Conference on VLSI Design and 2021 20th International Conference on Embedded Systems (VLSID). :123–128.
Physically Unclonable Functions (PUFs) have been considered promising lightweight primitives for random number generation and device authentication. Thanks to the imperfections occurring during the fabrication process of integrated circuits, each PUF generates a unique signature which can be used for chip identification. Although supposed to be unclonable, PUFs have been shown to be vulnerable to modeling attacks, where a set of collected challenge-response pairs is used to train a machine learning model that predicts the PUF response to unseen challenges. Challenge obfuscation has been proposed in recent years to tackle modeling attacks. However, knowing the obfuscation algorithm can help the adversary model the PUF. This paper proposes a modeling-resilient arbiter-PUF architecture that benefits from the randomness provided by PUFs in concealing the obfuscation scheme. The experimental results confirm the effectiveness of the proposed structure in countering PUF modeling attacks.
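For context, a conventional modeling attack on an unprotected arbiter PUF typically maps each challenge to a parity feature vector and fits a linear model on observed challenge-response pairs. The sketch below illustrates that baseline attack, not the countermeasure proposed in the paper; the CRPs here are random placeholders (a real attack would use responses measured from the target device).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def parity_features(challenges):
    """Map 0/1 challenge bits to the parity (phi) vector used in arbiter-PUF models."""
    signs = 1 - 2 * challenges                                 # 0/1 -> +1/-1
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]          # phi_i = prod_{j>=i} signs_j
    return np.hstack([phi, np.ones((challenges.shape[0], 1))]) # add constant term

# Hypothetical collected CRPs: 64-bit challenges and 1-bit responses.
rng = np.random.default_rng(0)
C = rng.integers(0, 2, size=(5000, 64))
r = rng.integers(0, 2, size=5000)        # placeholder responses, not from a real PUF

model = LogisticRegression(max_iter=1000).fit(parity_features(C), r)
print("training accuracy:", model.score(parity_features(C), r))
```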
Sahay, Rajeev, Brinton, Christopher G., Love, David J..  2021.  Frequency-based Automated Modulation Classification in the Presence of Adversaries. ICC 2021 - IEEE International Conference on Communications. :1–6.
Automatic modulation classification (AMC) aims to improve the efficiency of crowded radio spectrums by automatically predicting the modulation constellation of wireless RF signals. Recent work has demonstrated the ability of deep learning to achieve robust AMC performance using raw in-phase and quadrature (IQ) time samples. Yet, deep learning models are highly susceptible to adversarial interference, which causes intelligent prediction models to misclassify received samples with high confidence. Furthermore, adversarial interference is often transferable, allowing an adversary to attack multiple deep learning models with a single perturbation crafted for a particular classification network. In this work, we present a novel receiver architecture consisting of deep learning models capable of withstanding transferable adversarial interference. Specifically, we show that adversarial attacks crafted to fool models trained on time-domain features are not easily transferable to models trained using frequency-domain features. In this capacity, we demonstrate classification performance improvements greater than 30% on recurrent neural networks (RNNs) and greater than 50% on convolutional neural networks (CNNs). We further demonstrate that our frequency feature-based classification models achieve accuracies greater than 99% in the absence of attacks.
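The frequency-domain features referred to above can be obtained by applying a discrete Fourier transform to the raw IQ samples before classification. This is a minimal sketch; the frame length, per-example scaling, and real/imaginary stacking are assumptions rather than the authors' exact preprocessing.

```python
import numpy as np

def iq_to_frequency_features(iq):
    """Convert raw IQ time samples to frequency-domain features.

    iq: (n_examples, 2, n_samples) array of in-phase and quadrature components.
    Returns an (n_examples, 2, n_samples) array of real/imaginary FFT components.
    """
    complex_signal = iq[:, 0, :] + 1j * iq[:, 1, :]
    spectrum = np.fft.fft(complex_signal, axis=-1)
    spectrum /= np.max(np.abs(spectrum), axis=-1, keepdims=True)  # per-example scaling
    return np.stack([spectrum.real, spectrum.imag], axis=1)

# Example: 128-sample IQ frames, as in common AMC datasets.
frames = np.random.randn(32, 2, 128)
freq_feats = iq_to_frequency_features(frames)   # feed these to the classifier
```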
Kriaa, Siwar, Chaabane, Yahia.  2021.  SecKG: Leveraging attack detection and prediction using knowledge graphs. 2021 12th International Conference on Information and Communication Systems (ICICS). :112–119.
Advanced persistent threats targeting sensitive corporations are becoming stealthier and more complex, coordinating different attack steps and lateral movements and trying to stay undetected for a long time. Classical security solutions that rely on signature-based detection can be easily thwarted by malware using obfuscation and encryption techniques. More recent solutions use machine learning approaches for detecting outliers. Nevertheless, the majority of them reason over tabular, unstructured data, which can lead to missing obvious conclusions. We propose in this paper a novel approach that leverages a combination of knowledge graphs and machine learning techniques to detect and predict attacks. Using Cyber Threat Intelligence (CTI), we built a knowledge graph that processes event logs in order not only to detect attack techniques, but also to learn how to predict them.
Baye, Gaspard, Hussain, Fatima, Oracevic, Alma, Hussain, Rasheed, Ahsan Kazmi, S.M..  2021.  API Security in Large Enterprises: Leveraging Machine Learning for Anomaly Detection. 2021 International Symposium on Networks, Computers and Communications (ISNCC). :1–6.
Large enterprises offer thousands of micro-service applications to support their daily business activities by using Application Programming Interfaces (APIs). These applications generate huge amounts of traffic via millions of API calls every day, which is difficult to analyze for detecting potential abnormal behaviour and application outages. This makes Machine Learning (ML) a natural choice for analyzing the API traffic and obtaining intelligent predictions. This paper proposes an ML-based technique to detect and classify API traffic based on specific features such as bandwidth and number of requests per token. We employ a Support Vector Machine (SVM) with a linear kernel as a binary classifier to classify abnormal API traffic. Due to the scarcity of API datasets, we created a synthetic dataset inspired by a real-world API dataset. We then used a Gaussian-distribution outlier detection technique to create a labeled training dataset simulating real-world API log data, which we used to train the SVM classifier. Furthermore, to find a trade-off between accuracy and false positives, we aim to find the optimal value of the error-term penalty (C) of the classifier. The proposed anomaly detection method can be used in a plug-and-play manner and fits into the existing micro-service architecture with few adjustments, providing accurate results in a fast and reliable way. Our results demonstrate that the proposed method achieves an F1-score of 0.964 in detecting anomalies in API traffic with a 7.3% false positive rate.
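A hypothetical sketch of the linear-kernel SVM and C-parameter search described in this abstract. The two synthetic feature columns (bandwidth, requests per token) and the C grid are assumptions standing in for the authors' synthetic API logs.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Columns: bandwidth, requests per token (synthetic stand-ins for API log features).
normal = rng.normal([100, 20], [10, 5], size=(900, 2))
abnormal = rng.normal([300, 80], [30, 10], size=(100, 2))
X = np.vstack([normal, abnormal])
y = np.array([0] * 900 + [1] * 100)   # 1 = abnormal API traffic

# Search the error-term penalty C for the best F1 / false-positive trade-off.
search = GridSearchCV(SVC(kernel="linear"), {"C": [0.01, 0.1, 1, 10, 100]},
                      scoring="f1", cv=5)
search.fit(X, y)
print("best C:", search.best_params_["C"], "cross-validated F1:", search.best_score_)
```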
Sun, Jingxue, Huang, Zhiqiu, Yang, Ting, Wang, Wengjie, Zhang, Yuqing.  2021.  A System for Detecting Third-Party Tracking through the Combination of Dynamic Analysis and Static Analysis. IEEE INFOCOM 2021 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1–6.
With the continuous development of Internet technology, people pay more and more attention to privacy and security. In particular, third-party tracking is a major factor affecting privacy. So far, the most effective way to prevent third-party tracking is to create a blacklist. However, blacklist generation and maintenance must be carried out manually, which is inefficient and difficult to sustain. In order to generate blacklists more quickly and accurately in this era of big data, this paper proposes a machine learning system, MFTrackerDetector, against third-party tracking. The system is based on the theory of structural holes and only detects third-party trackers. The system consists of two subsystems, DMTrackerDetector and DFTrackerDetector. DMTrackerDetector is a JavaScript-based subsystem and DFTrackerDetector is a Flash-based subsystem. Because tracking code and non-tracking code often call different APIs, DMTrackerDetector builds a classifier using all the APIs in JavaScript as features and extracts the API features in JavaScript through dynamic analysis. Unlike static analysis, the dynamic analysis method can effectively avoid code obfuscation. DMTrackerDetector eventually generates a JavaScript-based third-party tracker list named Jlist. DFTrackerDetector constructs a classifier using all the APIs in ActionScript as features and extracts the API features in Flash scripts through static analysis. DFTrackerDetector finally generates a Flash-based third-party tracker list named Flist. DFTrackerDetector achieved 92.98% accuracy on the Flash test set and DMTrackerDetector achieved 90.79% accuracy on the JavaScript test set. MFTrackerDetector eventually generates a list of third-party trackers, which is a combination of Jlist and Flist.
2021-12-02
Rao, Poojith U., Sodhi, Balwinder, Sodhi, Ranjana.  2020.  Cyber Security Enhancement of Smart Grids Via Machine Learning - A Review. 2020 21st National Power Systems Conference (NPSC). :1–6.
The evolution of the power system into a smart grid (SG) has not only enhanced the monitoring and control capabilities of the power grid, but also raised security concerns and vulnerabilities. With the boom in the Internet of Things (IoT), a large number of sensors are being deployed across the grid. This has resulted in a huge amount of data available for processing and analysis. Machine learning (ML) and deep learning (DL) algorithms are being widely used to extract useful information from this data. In this context, this paper presents a comprehensive literature survey of different ML and DL techniques that have been used in the smart grid cyber security area. The survey summarizes the different types of cyber threats to which today's SGs are prone, followed by various ML and DL-assisted defense strategies. The effectiveness of the ML-based methods in enhancing the cyber security of SGs is also demonstrated with the help of a case study.
2021-11-30
Cultice, Tyler, Ionel, Dan, Thapliyal, Himanshu.  2020.  Smart Home Sensor Anomaly Detection Using Convolutional Autoencoder Neural Network. 2020 IEEE International Symposium on Smart Electronic Systems (iSES) (Formerly iNiS). :67–70.
We propose an autoencoder-based approach to anomaly detection in smart grid systems. Data-collecting sensors within smart home systems are susceptible to many data corruption issues, such as malicious attacks or physical malfunctions. By applying machine learning to a smart home or grid, sensor anomalies can be detected automatically for secure data collection and sensor-based system functionality. We also tested the effectiveness of this approach on real smart home sensor data collected over multiple years. Early detection of such data corruption issues is essential to the security and functionality of the various sensors and devices within a smart home.
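The reconstruction-error idea behind autoencoder anomaly detection can be sketched briefly. Below is an illustrative 1-D convolutional autoencoder in Keras; the window length, layer sizes, random training data, and three-sigma threshold are assumptions, not the authors' network or dataset.

```python
import numpy as np
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Conv1D, Conv1DTranspose

WINDOW = 64   # assumed length of each sensor-reading window

inp = Input(shape=(WINDOW, 1))
h = Conv1D(16, 7, strides=2, padding="same", activation="relu")(inp)
h = Conv1D(8, 7, strides=2, padding="same", activation="relu")(h)
h = Conv1DTranspose(8, 7, strides=2, padding="same", activation="relu")(h)
h = Conv1DTranspose(16, 7, strides=2, padding="same", activation="relu")(h)
out = Conv1D(1, 7, padding="same")(h)

autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")

# Train on normal sensor windows only; flag windows whose reconstruction error is high.
normal = np.random.rand(1000, WINDOW, 1)          # placeholder for real sensor data
autoencoder.fit(normal, normal, epochs=5, batch_size=32, verbose=0)
errors = np.mean((normal - autoencoder.predict(normal)) ** 2, axis=(1, 2))
threshold = errors.mean() + 3 * errors.std()       # assumed anomaly threshold
```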
2021-11-29
Yin, Yifei, Zulkernine, Farhana, Dahan, Samuel.  2020.  Determining Worker Type from Legal Text Data Using Machine Learning. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :444–450.
This project addresses a classic employment law question in Canada and elsewhere using a machine learning approach: how do we know whether a worker is an employee or an independent contractor? This is a central issue for self-represented litigants insofar as these two legal categories entail very different rights and employment protections. In this interdisciplinary research study, we collaborated with the Conflict Analytics Lab to develop machine learning models aimed at determining whether a worker is an employee or an independent contractor. We present a number of supervised learning models, including a neural network model that we implemented using data labeled by law researchers, and compare the accuracy of the models. Our neural network model achieved an accuracy rate of 91.5%. A critical discussion follows to identify the key features in the data that influence the accuracy of our models and to provide insights about the case outcomes.
Jamieson, Laura, Moreno-Garcia, Carlos Francisco, Elyan, Eyad.  2020.  Deep Learning for Text Detection and Recognition in Complex Engineering Diagrams. 2020 International Joint Conference on Neural Networks (IJCNN). :1–7.
Engineering drawings such as Piping and Instrumentation Diagrams contain a vast amount of text data which is essential to identify shapes, pipeline activities, tags, amongst others. These diagrams are often stored in undigitised format, such as paper copies, meaning the information contained within the diagrams is not readily accessible for inspection and further data analytics. In this paper, we make use of recent deep learning advances by selecting models for both text detection and text recognition, and apply them to the digitisation of text within real-world complex engineering diagrams. Results show that 90% of text strings were detected, including vertical text strings; however, certain non-text diagram elements were also detected as text. Text strings were obtained by the text recognition method for 86% of detected text instances. The findings show that whilst the chosen deep learning methods were able to detect and recognise text occurring in simple scenarios, more complex representations of text, including text strings located in close proximity to other drawing elements, remain a challenge.
Takemoto, Shu, Shibagaki, Kazuya, Nozaki, Yusuke, Yoshikawa, Masaya.  2020.  Deep Learning Based Attack for AI Oriented Authentication Module. 2020 35th International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC). :5–8.
A Neural Network Physical Unclonable Function (NN-PUF) has been proposed for the secure implementation of Edge AI. This study evaluates the tamper resistance of the NN-PUF against machine learning attacks. The machine learning attack in this study learns CRPs using deep learning. As a result of the evaluation experiment, the machine learning attack correctly predicted about 82% of CRPs. Therefore, this study reveals that the NN-PUF is vulnerable to machine learning attacks.