M
Severi, S., Sottile, F., Abreu, G., Pastrone, C., Spirito, M., Berens, F.  2014.  M2M technologies: Enablers for a pervasive Internet of Things. Networks and Communications (EuCNC), 2014 European Conference on. :1–5.

We survey the state-of-the-art on the Internet-of-Things (IoT) from a wireless communications point of view, as a result of the European FP7 project BUTLER, which has its focus on pervasiveness, context-awareness and security for IoT. In particular, we describe the efforts to develop so-called (wireless) enabling technologies, aimed at circumventing the many challenges involved in extending the current set of domains (“verticals”) of IoT applications towards a “horizontal” (i.e. integrated) vision of the IoT. We start by illustrating current research efforts in machine-to-machine (M2M) communication, which are mainly focused on vertical domains; we discuss some of them in detail and then depict the necessary horizontal vision for the future intelligent daily routine (“Smart Life”). We then describe the technical features of the most relevant heterogeneous communications technologies on which the IoT relies, in light of the ongoing M2M service layer standardization. Finally, we identify and present the key aspects, within three major cross-vertical categories, under which M2M technologies can function as enablers for the horizontal vision of the IoT.

Asmussen, Nils, Völp, Marcus, Nöthen, Benedikt, Härtig, Hermann, Fettweis, Gerhard.  2016.  M3: A Hardware/Operating-System Co-Design to Tame Heterogeneous Manycores. Proceedings of the Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems. :189–203.

In the last decade, the number of available cores increased and heterogeneity grew. In this work, we ask whether the design of current operating systems (OSes) is still appropriate if these trends continue and lead to abundantly available but heterogeneous cores, or whether they force a fundamental rethinking of how systems are designed. We argue that: 1. hiding heterogeneity behind a common hardware interface unifies, to a large extent, the control and coordination of cores and accelerators in the OS, 2. isolating at the network-on-chip rather than with processor features (like privileged mode, memory management unit, ...), allows running untrusted code on arbitrary cores, and 3. providing OS services via protocols over the network-on-chip, instead of via system calls, makes them accessible to arbitrary types of cores as well. In summary, this turns accelerators into first-class citizens and enables a single and convenient programming environment for all cores without the need to trust any application. In this paper, we introduce network-on-chip-level isolation, present the design of our microkernel-based OS, M3, and the common hardware interface, and evaluate the performance of our prototype in comparison to Linux. Somewhat surprisingly, even without using accelerators, M3 outperforms Linux in some application-level benchmarks by more than a factor of five.

Makarim, Rusydi H., Stevens, Marc.  2017.  M4GB: An Efficient Gröbner-Basis Algorithm. Proceedings of the 2017 ACM on International Symposium on Symbolic and Algebraic Computation. :293–300.

This paper introduces a new efficient algorithm for computing Gröbner bases, named M4GB. Like Faugère's algorithm F4, it is an extension of Buchberger's algorithm that describes: how to store already computed (tail-)reduced multiples of basis polynomials to prevent redundant work in the reduction step; and how to exploit efficient linear algebra for the reduction step. In comparison to F4 it removes further redundant work in the processing of reducible monomials. Furthermore, instead of translating the reduction of many critical pairs into the row reduction of some large matrix, our algorithm is described more natively and is efficient while processing critical pairs one by one. This feature implies that typically M4GB has to process fewer critical pairs than F4, and reduces the time and data complexity 'staircase' related to the increasing degree of regularity for a sequence of problems one observes for F4. To achieve high efficiency, M4GB has been designed specifically to operate only on tail-reduced polynomials, i.e., polynomials of which all terms except the leading term are non-reducible. This allows it to perform full reduction directly in the computation of a term-polynomial multiplication, where all computations are done over coefficient vectors over the non-reducible monomials. We have implemented a version of our new algorithm tailored for dense overdefined polynomial systems as a proof of concept and made our source code publicly available. We have made a comparison of our implementation against the implementations of FGBlib, Magma and OpenF4 on various dense Fukuoka MQ challenge problems that we were able to compute in reasonable time and memory. We observed that M4GB uses the least total CPU time and the least memory of all these implementations for those MQ problems, often by a significant factor. In the Fukuoka MQ challenges, the starting challenges of Type V and Type VI have 16 equations, which was chosen based on an extrapolated computational runtime of more than a month using Magma. M4GB allowed us to set new records for these Fukuoka MQ challenges, breaking Type V (F28) up to 18 equations and Type VI (F31) up to 19 equations, each computed within 11 days on our dual Xeon system.
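
As a rough point of reference for the kind of computation M4GB accelerates (not the M4GB algorithm itself), the following minimal Python sketch computes a Gröbner basis for a toy MQ-style system over GF(31) with SymPy; the polynomials are invented inputs, far smaller than the Fukuoka challenges.

    from sympy import symbols, groebner

    x, y, z = symbols('x y z')

    # Toy quadratic (MQ-style) system over GF(31); real Fukuoka
    # challenges have far more variables and equations.
    F = [x**2 + y*z + 1, x*y + z**2 + 2, y**2 + x*z + 3]

    # Classical baseline that M4GB-style algorithms aim to beat;
    # 'grevlex' is the usual order when solving MQ systems.
    G = groebner(F, x, y, z, modulus=31, order='grevlex')
    print(G)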

Gangadhar, S., Sterbenz, J. P. G.  2017.  Machine learning aided traffic tolerance to improve resilience for software defined networks. 2017 9th International Workshop on Resilient Networks Design and Modeling (RNDM). :1–7.

Software Defined Networks (SDNs) have gained prominence recently due to their flexible management and superior configuration functionality of the underlying network. SDNs, with OpenFlow as their primary implementation, allow for the use of a centralised controller to drive the decision making for all the supported devices in the network and manage traffic through routing table changes for incoming flows. In conventional networks, machine learning has been shown to detect malicious intrusion, and classify attacks such as DoS, user to root, and probe attacks. In this work, we extend the use of machine learning to improve traffic tolerance for SDNs. To achieve this, we extend the functionality of the controller to include a resilience framework, ReSDN, that incorporates machine learning to be able to distinguish DoS attacks, focusing on the Neptune attack for our experiments. Our model is trained using the MIT KDD 1999 dataset. The system is developed as a module on top of the POX controller platform and evaluated using the Mininet simulator.
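
A minimal sketch of the detection step described here, assuming scikit-learn and a local CSV export of KDD'99-style records; the file name, label value, and feature columns are illustrative, not the paper's exact setup:

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical local CSV of KDD'99-style records; the label value
    # and the four flow features chosen here are illustrative only.
    df = pd.read_csv('kddcup99.csv')
    y = (df['label'] == 'neptune').astype(int)
    X = df[['count', 'srv_count', 'serror_rate', 'same_srv_rate']]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3)
    clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
    print('held-out accuracy:', clf.score(X_te, y_te))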

Kosmidis, Konstantinos, Kalloniatis, Christos.  2017.  Machine Learning and Images for Malware Detection and Classification. Proceedings of the 21st Pan-Hellenic Conference on Informatics. :5:1–5:6.

Detecting malicious code with exact match on collected datasets is becoming a large-scale identification problem due to the existence of new malware variants. Being able to promptly and accurately identify new attacks enables security experts to respond effectively. My proposal is to develop an automated framework for identification of unknown vulnerabilities by leveraging current neural network techniques. This has a significant and immediate value for the security field, as current anti-virus software is typically able to recognize the malware type only after its infection, and preventive measures are limited. Artificial Intelligence plays a major role in automatic malware classification: numerous machine-learning methods, both supervised and unsupervised, have been researched to try classifying malware into families based on features acquired by static and dynamic analysis. The value of automated identification is clear, as feature engineering is both a time-consuming and time-sensitive task, with new malware studied while being observed in the wild.
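
The image-based angle of this work builds on representing a malware binary as a grayscale picture that a CNN can then consume. A minimal sketch of that conversion, assuming NumPy and Pillow; 'sample.exe' is a hypothetical input file:

    import numpy as np
    from PIL import Image

    def binary_to_image(path, width=256):
        # Interpret the raw bytes of a binary as one grayscale pixel each.
        data = np.frombuffer(open(path, 'rb').read(), dtype=np.uint8)
        rows = len(data) // width
        return Image.fromarray(data[:rows * width].reshape(rows, width))

    img = binary_to_image('sample.exe')      # hypothetical input file
    img.resize((64, 64)).save('sample.png')  # fixed-size input for a CNN
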
Devarakonda, Ranjeet, Giansiracusa, Michael, Kumar, Jitendra.  2018.  Machine Learning and Social Media to Mine and Disseminate Big Scientific Data. 2018 IEEE International Conference on Big Data (Big Data). :5312–5315.

One of the challenges in supplying the communities with wider access to scientific databases is the need for knowledge of database languages like Structured Query Language (SQL). Although the SQL language has been published in many forms, not everybody is able to write SQL queries. Another challenge is that it might not be practical to make the public aware of the structure of databases. There is a need for novice users to query relational databases using their natural language. To solve this problem, many natural language interfaces to structured databases have been developed. The goal is to provide a more intuitive method for generating database queries and delivering responses. Through social media, which makes it possible to interact with a wide section of the population, and with the help of natural language processing, researchers at the Atmospheric Radiation Measurement (ARM) Data Center at Oak Ridge National Laboratory (ORNL) have developed a concept to enable easy search and retrieval of data from several environmental data centers for the scientific community through social media. Using a machine learning framework that maps natural language text to thousands of datasets, instruments, variables, and data streams, the prototype system would allow users to request data through Twitter and receive a link (via tweet) to applicable data results on the project's search catalog tailored to their key words. This automated identification of relevant data from various petascale archives at ORNL could increase convenience, access, and use of the project's data by the broader community. In this paper we discuss how some data-intensive projects at ORNL are using innovative ways to help in data discovery.
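
A minimal sketch of the underlying matching idea, using TF-IDF cosine similarity to pair a free-text request with catalog descriptions; the catalog entries and tweet are invented examples, not ARM's actual metadata:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    catalog = [
        'surface meteorology temperature and humidity observations',
        'aerosol optical depth from ground-based instruments',
        'cloud radar reflectivity profiles',
    ]
    tweet = 'looking for humidity and temperature data'

    # Embed catalog descriptions and the request in one TF-IDF space,
    # then rank catalog entries by cosine similarity to the request.
    vec = TfidfVectorizer()
    M = vec.fit_transform(catalog + [tweet])
    scores = cosine_similarity(M[-1], M[:-1]).ravel()
    print('best match:', catalog[scores.argmax()])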

Bouchlaghem, Rihab, Elkhelifi, Aymen, Faiz, Rim.  2016.  A Machine Learning Approach For Classifying Sentiments in Arabic Tweets. Proceedings of the 6th International Conference on Web Intelligence, Mining and Semantics. :24:1–24:6.

Nowadays, sentiment analysis methods become more and more popular, especially with the rapid growth in the number of social media platform users. In this context, this paper presents a sentiment analysis approach which can faithfully translate the sentimental orientation of Arabic Twitter posts, based on a novel data representation and machine learning techniques. The proposed approach applied a wide range of features: lexical, surface-form, syntactic, etc. We also made use of lexicon features inferred from two Arabic sentiment word lexicons. To build our supervised sentiment analysis system, we use several standard classification methods (Support Vector Machines, K-Nearest Neighbour, Naïve Bayes, Decision Trees, Random Forest) known for their effectiveness over such classification issues. In our study, the Support Vector Machines classifier outperforms other supervised algorithms in Arabic Twitter sentiment analysis. Via ablation experiments, we show the positive impact of lexicon-based features on providing higher prediction performance.
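
A minimal sketch of the supervised pipeline, assuming scikit-learn; the paper's lexical, syntactic, and lexicon-based features are reduced here to plain TF-IDF, and the two labelled tweets are placeholders:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Placeholder training data; real input would be labelled Arabic tweets.
    tweets = ['placeholder positive tweet text', 'placeholder negative tweet text']
    labels = ['pos', 'neg']

    # Word unigrams/bigrams stand in for the paper's richer feature set.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    model.fit(tweets, labels)
    print(model.predict(['placeholder new tweet text']))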

Chapla, Happy, Kotak, Riddhi, Joiser, Mittal.  2019.  A Machine Learning Approach for URL Based Web Phishing Using Fuzzy Logic as Classifier. 2019 International Conference on Communication and Electronics Systems (ICCES). :383–388.

Phishing is a major problem of the internet era, in which the security of data on the web is gaining increasing importance. Phishing is one of the most harmful ways to covertly obtain credential information such as usernames, passwords, or account numbers from users. Users are often unaware of this type of attack and may later become unwitting participants in further phishing attacks. The consequences may include loss of funds, personal information, brand reputation, or brand trust. Detecting phishing sites is therefore necessary. In this paper we design a framework for phishing detection using the URL.
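
A minimal sketch of the general idea, with hand-rolled fuzzy membership functions over simple URL features; the features and thresholds are invented for illustration and are not the paper's rule base:

    from urllib.parse import urlparse

    def suspicion(url):
        # Fuzzy memberships in [0, 1] instead of hard cut-offs; the
        # feature set and thresholds here are illustrative only.
        host = urlparse(url).netloc
        long_url = min(len(url) / 75.0, 1.0)
        many_dots = min(host.count('.') / 4.0, 1.0)
        has_at = 1.0 if '@' in url else 0.0
        ip_host = 1.0 if host.replace('.', '').isdigit() else 0.0
        return max(has_at, ip_host, (long_url + many_dots) / 2)

    print(suspicion('http://192.168.0.1/login@verify.example.com'))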

Ahmadi, Ali, Bidmeshki, Mohammad-Mahdi, Nahar, Amit, Orr, Bob, Pas, Michael, Makris, Yiorgos.  2016.  A Machine Learning Approach to Fab-of-origin Attestation. Proceedings of the 35th International Conference on Computer-Aided Design. :92:1–92:6.

We introduce a machine learning approach for distinguishing between integrated circuits fabricated in a ratified facility and circuits originating from an unknown or undesired source based on parametric measurements. Unlike earlier approaches, which seek to achieve the same objective in a general, design-independent manner, the proposed method leverages the interaction between the idiosyncrasies of the fabrication facility and a specific design, in order to create a customized fab-of-origin membership test for the circuit in question. Effectiveness of the proposed method is demonstrated using two large industrial datasets from a 65nm Texas Instruments RF transceiver manufactured in two different fabrication facilities.
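
A minimal sketch of one plausible way to build such a membership test, using scikit-learn's one-class SVM on synthetic parametric measurements; the paper's actual classifier and data differ:

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    ratified = rng.normal(0.0, 1.0, size=(500, 8))   # parts from the known fab
    suspect = rng.normal(0.8, 1.2, size=(20, 8))     # parts of unknown origin

    # Fit the membership test on known-fab measurements only, then
    # flag parts whose parametric signature falls outside it.
    test = OneClassSVM(nu=0.05, gamma='scale').fit(ratified)
    flagged = (test.predict(suspect) == -1).sum()
    print('flagged', flagged, 'of', len(suspect), 'suspect parts')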

Jo, Changyeon, Cho, Youngsu, Egger, Bernhard.  2017.  A Machine Learning Approach to Live Migration Modeling. Proceedings of the 2017 Symposium on Cloud Computing. :351–364.

Live migration is one of the key technologies to improve data center utilization, power efficiency, and maintenance. Various live migration algorithms have been proposed; each exhibiting distinct characteristics in terms of completion time, amount of data transferred, virtual machine (VM) downtime, and VM performance degradation. To make matters worse, not only the migration algorithm but also the applications running inside the migrated VM affect the different performance metrics. With service-level agreements and operational constraints in place, choosing the optimal live migration technique has so far been an open question. In this work, we propose an adaptive machine learning-based model that is able to predict with high accuracy the key characteristics of live migration as a function of the migration algorithm and the workload running inside the VM. We discuss the important input parameters for accurately modeling the target metrics, and describe how to profile them with little overhead. Compared to existing work, we are not only able to model all commonly used migration algorithms but also predict important metrics that have not been considered so far, such as the performance degradation of the VM. In a comparison with the state-of-the-art, we show that the proposed model outperforms existing work by a factor of 2 to 5.
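
A minimal sketch of the modeling step, assuming scikit-learn; the profiled input features, the toy analytic formula used to generate training targets, and the data are all invented for illustration:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    # Invented inputs: VM size (GB), page-dirty rate (MB/s), bandwidth (MB/s).
    X = rng.uniform([1, 0, 100], [64, 500, 1200], size=(1000, 3))
    # Invented analytic stand-in for measured migration times (seconds).
    y = X[:, 0] * 1024 / (X[:, 2] - 0.5 * X[:, 1]).clip(min=50)

    model = RandomForestRegressor(n_estimators=200).fit(X, y)
    print(model.predict([[16, 120, 800]]))   # predicted migration time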

Ndichu, S., Ozawa, S., Misu, T., Okada, K.  2018.  A Machine Learning Approach to Malicious JavaScript Detection using Fixed Length Vector Representation. 2018 International Joint Conference on Neural Networks (IJCNN). :1–8.

To add more functionality and enhance usability of web applications, JavaScript (JS) is frequently used. Even with the many advantages and usefulness of JS, an annoying fact is that many recent cyberattacks such as drive-by-download attacks exploit vulnerabilities of JS code. In general, malicious JS code is not easy to detect, because it sneakily exploits vulnerabilities of browsers and plugin software, and attacks visitors of a web site unknowingly. To protect users from such threats, an accurate detection system for malicious JS is needed. Conventional approaches often employ signature and heuristic-based methods, which are prone to suffer from zero-day attacks, i.e., causing many false negatives and/or false positives. For this problem, this paper adopts a machine-learning approach to feature learning called Doc2Vec, which is a neural network model that can learn context information of texts. The extracted features are given to a classifier model (e.g., SVM and neural networks) and it judges the maliciousness of a JS code. In the performance evaluation, we use the D3M Dataset (Drive-by-Download Data by Marionette) for malicious JS codes and JSUPACK for benign ones for both training and test purposes. We then compare the performance to other feature learning methods. Our experimental results show that the proposed Doc2Vec features provide better accuracy and fast classification in malicious JS code detection compared to conventional approaches.
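
A minimal sketch of the feature-learning step, assuming gensim for Doc2Vec and scikit-learn for the classifier; the two token-level 'scripts' are trivial placeholders for real tokenized JS:

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument
    from sklearn.svm import SVC

    # Trivial placeholders for tokenized JS; 0 = benign, 1 = malicious.
    scripts = ['var a = 1 ;', 'eval ( unescape ( payload ) ) ;']
    labels = [0, 1]

    docs = [TaggedDocument(s.split(), [i]) for i, s in enumerate(scripts)]
    d2v = Doc2Vec(docs, vector_size=50, min_count=1, epochs=40)

    # Fixed-length vectors from Doc2Vec feed an SVM classifier.
    X = [d2v.infer_vector(s.split()) for s in scripts]
    clf = SVC().fit(X, labels)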

Wang, Qian, Gao, Mingze, Qu, Gang.  2018.  A Machine Learning Attack Resistant Dual-Mode PUF. Proceedings of the 2018 on Great Lakes Symposium on VLSI. :177–182.

Silicon Physical Unclonable Function (PUF) is arguably the most promising hardware security primitive. In particular, PUFs that are capable of generating a large amount of challenge response pairs (CRPs) can be used in many security applications. However, these CRPs can also be exploited by machine learning attacks to model the PUF and predict its response. In this paper, we first show that, based on data in the public domain, two popular PUFs that can generate CRPs (i.e., arbiter PUF and reconfigurable ring oscillator (RO) PUF) can be broken by a simple logistic regression (LR) attack with about 99% accuracy. We then propose a feedback structure to XOR the PUF response with the challenge and challenge the PUF again to generate the response. Results show that this successfully reduces LR's learning accuracy to the low-50% range, but an artificial neural network (ANN) learning attack still has an 80% success rate. Therefore, we propose a configurable ring oscillator based dual-mode PUF which works with both an odd number of inverters (like the reconfigurable RO PUF) and an even number of inverters (like a bistable ring (BR) PUF). Since currently there are no known attacks that can model both RO PUF and BR PUF, the dual-mode PUF will be resistant to modeling attacks as long as we can hide its working mode from the attackers, which we achieve with two practical methods. Finally, we implement the proposed dual-mode PUF on Nexys 4 FPGA boards and collect real measurements to show that it reduces the learning accuracy of LR and ANN to the mid-50% and low-60% ranges, respectively.
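
For context, a minimal sketch of the logistic-regression modeling attack the paper defends against, run on a simulated arbiter PUF using the standard parity (delay-difference) feature transform; all data is synthetic:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n, crps = 64, 20000
    w = rng.normal(size=n + 1)                 # simulated stage delays

    C = rng.integers(0, 2, size=(crps, n))     # random challenges
    # Standard parity transform: phi_k = prod_{j>=k} (1 - 2*c_j).
    phi = np.cumprod(1 - 2 * C[:, ::-1], axis=1)[:, ::-1]
    X = np.hstack([phi, np.ones((crps, 1))])
    y = (X @ w > 0).astype(int)                # simulated responses

    atk = LogisticRegression(max_iter=1000).fit(X[:15000], y[:15000])
    print('attack accuracy:', atk.score(X[15000:], y[15000:]))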

Su, H., Zwolinski, M., Halak, B.  2018.  A Machine Learning Attacks Resistant Two Stage Physical Unclonable Functions Design. 2018 IEEE 3rd International Verification and Security Workshop (IVSW). :52–55.

Physical Unclonable Functions (PUFs) have been designed for many security applications such as identification, authentication of devices and key generation, especially for lightweight electronics. Traditional approaches to enhancing security, such as hash functions, may be expensive and resource dependent. However, modelling attacks using machine learning (ML) show the vulnerability of most PUFs. In this paper, a combination of a 32-bit current mirror and 16-bit arbiter PUFs in 65nm CMOS technology is proposed to improve resilience against modelling attacks. Both PUFs are individually vulnerable to machine learning attacks, with output prediction rates of 99.2% and 98.8%; the combined design reduces the prediction rate to 60%.

Zhang, Kevin.  2019.  A Machine Learning Based Approach to Identify SQL Injection Vulnerabilities. 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE). :1286–1288.

This paper presents a machine learning classifier designed to identify SQL injection vulnerabilities in PHP code. Both classical and deep learning based machine learning algorithms were used to train and evaluate classifier models using input validation and sanitization features extracted from source code files. In ten-fold cross-validation, a model trained using a Convolutional Neural Network (CNN) achieved the highest precision (95.4%), while a model based on a Multilayer Perceptron (MLP) achieved the highest recall (63.7%) and the highest f-measure (0.746).

He, Z., Zhang, T., Lee, R. B.  2017.  Machine Learning Based DDoS Attack Detection from Source Side in Cloud. 2017 IEEE 4th International Conference on Cyber Security and Cloud Computing (CSCloud). :114–120.

Denial of service (DOS) attacks are a serious threat to network security. These attacks are often sourced from virtual machines in the cloud, rather than from the attacker's own machine, to achieve anonymity and higher network bandwidth. Past research focused on analyzing traffic on the destination (victim's) side with predefined thresholds. These approaches have significant disadvantages: they are only passive defenses after the attack, they cannot use the outbound statistical features of attacks, and it is hard to trace back to the attacker. In this paper, we propose a DOS attack detection system on the source side in the cloud, based on machine learning techniques. This system leverages statistical information from both the cloud server's hypervisor and the virtual machines, to prevent network packets from being sent out to the outside network. We evaluate nine machine learning algorithms and carefully compare their performance. Our experimental results show that more than 99.7% of four kinds of DOS attacks are successfully detected. Our approach does not degrade performance and can be easily extended to broader DOS attacks.

Wu, Q., Zhao, W.  2018.  Machine Learning Based Human Activity Detection in a Privacy-Aware Compliance Tracking System. 2018 IEEE International Conference on Electro/Information Technology (EIT). :0673–0676.

In this paper, we report our work on using machine learning techniques to predict back bending activity based on field data acquired in a local nursing home. The data are recorded by a privacy-aware compliance tracking system (PACTS). The objective of PACTS is to detect back-bending activities and issue real-time alerts to the participant when she bends her back excessively, which we hope could help the participant form good habits of using proper body mechanics when performing lifting/pulling tasks. We show that our algorithms can differentiate nursing staff's baseline and high-level bending activities by using human skeleton data without any expert rules.
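
A minimal sketch of one plausible feature behind such a system: a trunk flexion angle computed from 3-D skeleton joints. The joint coordinates and the 30-degree alert threshold are invented; the paper's actual features may differ:

    import numpy as np

    def trunk_angle(hip, shoulder):
        # Angle between the hip-to-shoulder vector and the vertical axis.
        spine = np.asarray(shoulder) - np.asarray(hip)
        vertical = np.array([0.0, 1.0, 0.0])
        cosang = spine @ vertical / np.linalg.norm(spine)
        return np.degrees(np.arccos(cosang))

    angle = trunk_angle(hip=[0.0, 1.0, 0.0], shoulder=[0.3, 1.5, 0.0])
    print('bend angle:', round(angle, 1), 'deg',
          '-> alert' if angle > 30 else '-> ok')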

Le, Duc C., Nur Zincir-Heywood, A.  2019.  Machine Learning Based Insider Threat Modelling and Detection. 2019 IFIP/IEEE Symposium on Integrated Network and Service Management (IM). :1–6.

Malicious insider attacks have recently become one of the most damaging threats to companies and government agencies. This paper proposes a new framework for constructing a user-centered machine learning based insider threat detection system on multiple data granularity levels. System evaluations and analysis are performed not only on individual data instances but also on normal and malicious insiders, where insider scenario specific results and delay in detection are reported and discussed. Our results show that the machine learning based detection system can learn from limited ground truth and detect new malicious insiders with a high accuracy.

Halimaa A., Anish, Sundarakantham, K.  2019.  Machine Learning Based Intrusion Detection System. 2019 3rd International Conference on Trends in Electronics and Informatics (ICOEI). :916–920.

In order to examine malicious activity that occurs in a network or a system, an intrusion detection system is used. An intrusion detection system is software or a device that scans a system or a network for distrustful activity. Due to the growing connectivity between computers, intrusion detection becomes vital to network security. Various machine learning techniques and statistical methodologies have been used to build different types of Intrusion Detection Systems to protect networks. The performance of an intrusion detection system mainly depends on its accuracy, which must be enhanced to reduce false alarms and to increase the detection rate. In order to improve performance, different techniques have been used in recent works. Analyzing huge volumes of network traffic data is the main work of an intrusion detection system, and a well-organized classification methodology is required to overcome this issue. This issue is addressed in the proposed approach. Machine learning techniques like Support Vector Machine (SVM) and Naïve Bayes are applied; these techniques are well known to solve classification problems. For evaluation of the intrusion detection system, the NSL-KDD knowledge discovery dataset is used. The outcomes show that SVM works better than Naïve Bayes. To perform comparative analysis, effective classification methods like Support Vector Machine and Naïve Bayes are taken, and their accuracy and misclassification rates are calculated.

Chandre, Pankaj Ramchandra, Mahalle, Parikshit Narendra, Shinde, Gitanjali Rahul.  2018.  Machine Learning Based Novel Approach for Intrusion Detection and Prevention System: A Tool Based Verification. 2018 IEEE Global Conference on Wireless Computing and Networking (GCWCN). :135–140.

Nowadays, Wireless Sensor Networks (WSNs) are widely used in military applications and have been extended to healthcare, industrial environments, and many more. As we know, WSNs have some unique features such as a limited power supply, minimum bandwidth, and limited energy. Multiple techniques are available to secure traditional networks, but the same techniques cannot be used to secure WSNs. To increase the overall security of WSNs, we require new ideas as well as new approaches. In general, intrusion prevention is the primary issue in WSNs, while intrusion detection has already reached saturation. Thus, we need an efficient solution for proactive intrusion prevention in WSNs, and formal validation of protocols in WSNs is an essential area of research. This paper aims to formally verify and model some protocols used for intrusion detection using the AVISPA tool and the HLPSL language. Results showing that authentication and DoS attacks were detected are presented, but there is a need to prevent such types of attacks. Finally, a system that uses machine learning is proposed for intrusion prevention in wireless sensor networks.

Hirano, Manabu, Kobayashi, Ryotaro.  2019.  Machine Learning Based Ransomware Detection Using Storage Access Patterns Obtained From Live-forensic Hypervisor. 2019 Sixth International Conference on Internet of Things: Systems, Management and Security (IOTSMS). :1–6.

With the rapid increase in the number of Internet of Things (IoT) devices, mobile devices, cloud services, and cyber-physical systems, large-scale cyber attacks on enterprises and public sectors have increased. In particular, ransomware attacks damaged the UK's National Health Service and many enterprises around the world in 2017. Therefore, researchers have proposed ransomware detection and prevention systems. However, manual inspection in static and dynamic ransomware analysis is time-consuming, and it cannot cope with the rapid increase in variants within a ransomware family. Recently, machine learning has been used to automate ransomware analysis by creating a behavioral model of the same ransomware family. To create effective behavioral models of ransomware, we first obtained storage access patterns of live ransomware samples and of a benign application by using a live-forensic hypervisor called WaybackVisor. To distinguish ransomware from a benign application that has similar behavior to ransomware, we carefully selected five-dimensional features that were extracted both from actual ransomware's Input and Output (I/O) logs and from a benign program's I/O logs. We created and evaluated machine learning models by using Random Forest, Support Vector Machine, and K-Nearest Neighbors. Our experiments using the proposed five features of storage access patterns achieved an F-measure of 98%.
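
A minimal sketch in the spirit of this setup, assuming scikit-learn: a small per-trace feature vector feeding Random Forest. The five features, the toy traces, and the labels below are invented stand-ins, not the paper's:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def features(ops):
        # ops: list of (kind, offset, length) storage accesses.
        writes = [o for o in ops if o[0] == 'w']
        offsets = np.array([o[1] for o in ops], dtype=float)
        return [
            len(writes) / max(len(ops), 1),                      # write ratio
            np.mean([o[2] for o in ops]),                        # mean request size
            np.std(np.diff(offsets)) if len(ops) > 1 else 0.0,   # seek spread
            len(set(o[1] for o in writes)),                      # distinct write offsets
            len(ops),                                            # request count
        ]

    traces = [
        [('r', 0, 4096), ('r', 4096, 4096), ('w', 8192, 4096)],
        [('r', 0, 4096), ('w', 0, 4096), ('w', 65536, 4096), ('w', 131072, 4096)],
    ]
    labels = [0, 1]   # 0 = benign-like, 1 = ransomware-like
    clf = RandomForestClassifier().fit([features(t) for t in traces], labels)
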
Anderson, Blake, McGrew, David.  2017.  Machine Learning for Encrypted Malware Traffic Classification: Accounting for Noisy Labels and Non-Stationarity. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :1723–1732.

The application of machine learning for the detection of malicious network traffic has been well researched over the past several decades; it is particularly appealing when the traffic is encrypted because traditional pattern-matching approaches cannot be used. Unfortunately, the promise of machine learning has been slow to materialize in the network security domain. In this paper, we highlight two primary reasons why this is the case: inaccurate ground truth and a highly non-stationary data distribution. To demonstrate and understand the effect that these pitfalls have on popular machine learning algorithms, we design and carry out experiments that show how six common algorithms perform when confronted with real network data. With our experimental results, we identify the situations in which certain classes of algorithms underperform on the task of encrypted malware traffic classification. We offer concrete recommendations for practitioners given the real-world constraints outlined. From an algorithmic perspective, we find that the random forest ensemble method outperformed competing methods. More importantly, feature engineering was decisive; we found that iterating on the initial feature set, and including features suggested by domain experts, had a much greater impact on the performance of the classification system. For example, linear regression using the more expressive feature set easily outperformed the random forest method using a standard network traffic representation on all criteria considered. Our analysis is based on millions of TLS encrypted sessions collected over 12 months from a commercial malware sandbox and two geographically distinct, large enterprise networks.

Hussein, A., Salman, O., Chehab, A., Elhajj, I., Kayssi, A.  2019.  Machine Learning for Network Resiliency and Consistency. 2019 Sixth International Conference on Software Defined Systems (SDS). :146–153.

Being able to describe a specific network as consistent is a large step towards resiliency. Next to the importance of security lies the necessity of consistency verification. Attackers are currently focusing on targeting small but crucial goals such as network configurations or flow tables. These types of attacks would defy the whole purpose of a security system when built on top of an inconsistent network. Advances in Artificial Intelligence (AI) are playing a key role in ensuring a fast response to the large number of evolving threats. Software Defined Networking (SDN), being centralized by design, offers a global overview of the network. Robustness and adaptability are part of a package offered by programmable networking, which drove us to consider the integration of AI and SDN. The general goal of our series is to achieve an Artificial Intelligence Resiliency System (ARS). The aim of this paper is to propose a new AI-based consistency verification system, which will be part of ARS in our future work. The comparison of different deep learning architectures shows that Convolutional Neural Networks (CNN) give the best results, with an accuracy of 99.39% on our dataset and 96% on our consistency test scenario.

Coleman, M. S., Doody, D. P., Shields, M. A.  2018.  Machine Learning for Real-Time Data-Driven Security Practices. 2018 29th Irish Signals and Systems Conference (ISSC). :1–6.

The risk of cyber-attacks exploiting vulnerable organisations has increased significantly over the past several years. These attacks may combine to exploit a vulnerability breach within a system's protection strategy, which has the potential for loss, damage or destruction of assets. Consequently, every vulnerability has an accompanying risk, which is defined as the "intersection of assets, threats, and vulnerabilities" [1]. This research project aims to experimentally compare the similarity-based ranking of cyber security information utilising a recommendation environment. The Memory-Based Collaborative Filtering technique was employed, specifically the User-Based and Item-Based approaches. These systems utilised information from the National Vulnerability Database, specifically for the identification and similarity-based ranking of cyber-security vulnerability information relating to hardware and software applications. Experiments were performed using the Item-Based technique to identify the optimum system parameters, evaluated through the AUC evaluation metric. Once identified, the Item-Based technique was compared with the User-Based technique, which utilised the parameters identified from the previous experiments. During these experiments, the Pearson's Correlation Coefficient and the Cosine similarity measure were used. From these experiments, it was identified that the Item-Based technique employing the Cosine similarity measure achieved an AUC evaluation metric of 0.80225.
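
A minimal sketch of Item-Based collaborative filtering with the Cosine measure, assuming NumPy and scikit-learn; the user-item interaction matrix is a toy example, not NVD data:

    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity

    # rows = users, columns = vulnerability entries (1 = interacted with)
    R = np.array([[1, 1, 0, 0],
                  [0, 1, 1, 0],
                  [1, 0, 1, 1]])

    sim = cosine_similarity(R.T)      # item-item Cosine similarities
    user = R[0]
    scores = sim @ user               # aggregate similarity to seen items
    scores[user == 1] = -np.inf       # mask items already seen
    print('recommend item', int(scores.argmax()))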

Perez, R. Lopez, Adamsky, F., Soua, R., Engel, T.  2018.  Machine Learning for Reliable Network Attack Detection in SCADA Systems. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :633–638.

Critical Infrastructures (CIs) use Supervisory Control And Data Acquisition (SCADA) systems for remote control and monitoring. Sophisticated security measures are needed to address malicious intrusions, which are steadily increasing in number and variety due to the massive spread of connectivity and standardisation of open SCADA protocols. Traditional Intrusion Detection Systems (IDSs) cannot detect attacks that are not already present in their databases. Therefore, in this paper, we assess Machine Learning (ML) for intrusion detection in SCADA systems using a real data set collected from a gas pipeline system and provided by the Mississippi State University (MSU). The contribution of this paper is two-fold: 1) the evaluation of four techniques for missing data estimation and two techniques for data normalization, 2) the performances of Support Vector Machine (SVM) and Random Forest (RF) are assessed in terms of accuracy, precision, recall and F1 score for intrusion detection. Two cases are differentiated: binary and categorical classifications. Our experiments reveal that RF detects intrusions effectively, with an F1 score greater than 99%.
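
A minimal sketch of the processing chain evaluated here, assuming scikit-learn: one imputer and one scaler stand in for the four estimation and two normalization techniques the paper compares, and the data loading is hypothetical:

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.impute import SimpleImputer
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MinMaxScaler

    # Hypothetical flat export of the MSU gas pipeline data set.
    df = pd.read_csv('gas_pipeline.csv')
    X, y = df.drop(columns='label'), df['label']

    pipe = make_pipeline(SimpleImputer(strategy='mean'),   # missing data estimation
                         MinMaxScaler(),                   # normalization
                         RandomForestClassifier(n_estimators=100))
    print(cross_val_score(pipe, X, y, scoring='f1_weighted').mean())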

Cammarota, Rosario, Banerjee, Indranil, Rosenberg, Ofer.  2018.  Machine Learning IP Protection. 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD). :1–3.

Machine learning, specifically deep learning, is becoming a key technology component in application domains such as identity management, finance, automotive, and healthcare, to name a few. Proprietary machine learning models - Machine Learning IP - are developed and deployed at the network edge, on end devices and in the cloud, to maximize user experience. With the proliferation of applications embedding Machine Learning IPs, machine learning models and hyper-parameters become attractive to attackers and require protection. Major players in the semiconductor industry provide on-device mechanisms to protect the IP at rest and during execution from being copied, altered, reverse engineered, and abused by attackers. In this work we explore system security architecture mechanisms and their applications to Machine Learning IP protection.