Biblio

Found 435 results

Filters: Keyword is machine learning
2020-09-18
Zolanvari, Maede, Teixeira, Marcio A., Gupta, Lav, Khan, Khaled M., Jain, Raj.  2019.  Machine Learning-Based Network Vulnerability Analysis of Industrial Internet of Things. IEEE Internet of Things Journal. 6:6822—6834.
It is critical to secure Industrial Internet of Things (IIoT) devices because of the potentially devastating consequences of an attack. Machine learning (ML) and big data analytics are two powerful levers for analyzing and securing Internet of Things (IoT) technology, and by extension these techniques can help improve the security of IIoT systems as well. In this paper, we first present common IIoT protocols and their associated vulnerabilities. Then, we run a cyber-vulnerability assessment and discuss the use of ML in countering these susceptibilities. Following that, a literature review of available intrusion detection solutions using ML models is presented. Finally, we discuss our case study, which includes details of a real-world testbed that we built to conduct cyber-attacks and to design an intrusion detection system (IDS). We deploy backdoor, command injection, and Structured Query Language (SQL) injection attacks against the system and demonstrate how an ML-based anomaly detection system can perform well in detecting these attacks. We evaluate the performance through representative metrics to give a fair view of the effectiveness of the methods.
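A minimal sketch of the kind of ML-based anomaly detection the abstract evaluates, assuming scikit-learn and entirely synthetic data; the flow features (packet rate, payload length, duration) are hypothetical illustrations, not the paper's testbed traffic.

```python
# Sketch: anomaly detection over hypothetical network-flow features.
# Data and feature names are invented, not the paper's testbed.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical flow features: [packet_rate, mean_payload_len, conn_duration]
normal = rng.normal(loc=[100, 500, 2.0], scale=[10, 50, 0.5], size=(1000, 3))
attack = rng.normal(loc=[400, 80, 0.1], scale=[50, 20, 0.05], size=(50, 3))

model = IsolationForest(contamination=0.05, random_state=0).fit(normal)
flags = model.predict(attack)  # -1 marks flows the model considers anomalous
print(f"flagged {np.sum(flags == -1)}/{len(attack)} attack flows as anomalous")
```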
2020-09-14
Ortiz Garcés, Ivan, Cazares, Maria Fernada, Andrade, Roberto Omar.  2019.  Detection of Phishing Attacks with Machine Learning Techniques in Cognitive Security Architecture. 2019 International Conference on Computational Science and Computational Intelligence (CSCI). :366–370.
The number of phishing attacks has increased in Latin America, exceeding the operational capacity of cybersecurity analysts. Cognitive security proposes the use of big data, machine learning, and data analytics to improve response times in attack detection. This paper presents an investigation into the anomalous behavior associated with phishing web attacks and how machine learning techniques can address the problem. The analysis is performed on contaminated data sets, using Python tools to develop machine learning models that detect phishing attacks by analyzing URLs and classifying them as good or bad based on specific URL characteristics, with the goal of providing real-time information for proactive decisions that minimize the impact of an attack.
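The URL-feature approach described here might look like the following sketch with scikit-learn; the features, training URLs, and labels are invented for illustration, not the paper's dataset or feature set.

```python
# Sketch: hand-crafted URL features feeding a classifier (toy data).
from urllib.parse import urlparse
from sklearn.ensemble import RandomForestClassifier

def url_features(url):
    p = urlparse(url)
    return [
        len(url),                  # long URLs are a common phishing signal
        url.count('-'),
        url.count('.'),
        int('@' in url),           # '@' can hide the real host
        int(p.scheme == 'https'),
    ]

train_urls = ["https://example.com/login",
              "http://paypa1-secure-update.xyz/@verify"]
labels = [0, 1]  # 0 = good URL, 1 = bad URL (toy labels)
clf = RandomForestClassifier(random_state=0)
clf.fit([url_features(u) for u in train_urls], labels)
print(clf.predict([url_features("http://bank-account-verify.top/@login")]))
```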
2020-09-11
Garip, Mevlut Turker, Lin, Jonathan, Reiher, Peter, Gerla, Mario.  2019.  SHIELDNET: An Adaptive Detection Mechanism against Vehicular Botnets in VANETs. 2019 IEEE Vehicular Networking Conference (VNC). :1—7.
Vehicular ad hoc networks (VANETs) are designed to provide traffic safety by enabling vehicles to broadcast information, such as speed, location, and heading, through inter-vehicular communications to proactively avoid collisions. However, the attacks targeting these networks might overshadow their advantages if not protected against. One powerful threat against VANETs is vehicular botnets. In our earlier work, we demonstrated several vehicular botnet attacks that can have damaging impacts on the security and privacy of VANETs. In this paper, we present SHIELDNET, the first detection mechanism against vehicular botnets. Similar to the detection approaches against Internet botnets, we target the vehicular botnet communication and use several machine learning techniques to identify vehicular bots. We show via simulation that SHIELDNET can identify 77 percent of the vehicular bots. We propose several improvements on the VANET standards and show that their existing vulnerabilities make an effective defense against vehicular botnets infeasible.
Azakami, Tomoka, Shibata, Chihiro, Uda, Ryuya, Kinoshita, Toshiyuki.  2019.  Creation of Adversarial Examples with Keeping High Visual Performance. 2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT). :52—56.
The accuracy of image classification by convolutional neural networks (CNNs) now exceeds human ability and contributes to various fields. However, this improvement in image recognition deals a serious blow to image-based security systems such as CAPTCHA. In particular, because character-string CAPTCHAs already add distortion and noise so that computers cannot read them, lowered human readability has become a problem. Adversarial examples are a technique for producing images that intentionally cause a machine learning image classifier to err. The key feature of this technique is that when humans compare the original image with the adversarial example, they cannot perceive any difference in appearance. However, adversarial examples created with conventional FGSM cannot reliably cause misclassification in strongly nonlinear networks such as CNNs. Osadchy et al. researched applying adversarial examples to CAPTCHA and attempted to make a CNN misclassify them, but they could not make the CNN misclassify character images. In this research, we propose a method that applies FGSM to character-string CAPTCHAs and makes a CNN misclassify them.
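For reference, FGSM, the baseline this abstract builds on, perturbs an input along the sign of the loss gradient. Below is a generic sketch in PyTorch; the tiny linear model and random "character image" are stand-ins, not the paper's CAPTCHA CNN.

```python
# Sketch: one FGSM step on a stand-in model and input.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in character image
y = torch.tensor([3])                             # its true class label

loss = F.cross_entropy(model(x), y)
loss.backward()                                    # gradient w.r.t. the input
epsilon = 0.1                                      # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)  # FGSM step, kept in [0, 1]
print(model(x_adv).argmax(dim=1))                  # possibly misclassified now
```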
Shekhar, Heemany, Moh, Melody, Moh, Teng-Sheng.  2019.  Exploring Adversaries to Defend Audio CAPTCHA. 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). :1155—1161.
CAPTCHA is a web-based authentication method used by websites to distinguish between humans (valid users) and bots (attackers). Audio captcha is an accessible captcha meant for visually disabled users, such as color-blind, blind, and near-sighted users. Firstly, this paper analyzes how secure current audio captchas are against attacks using machine learning (ML) and deep learning (DL) models. Each audio captcha is made up of five, seven, or ten random digits [0-9] spoken one after the other, with varying background noise throughout the length of the audio. If the ML or DL model is able to correctly identify all spoken digits in the correct order of occurrence in a single audio captcha, we consider that captcha broken and the attack successful. Throughout the paper, accuracy refers to the attack model's success at breaking audio captchas; the higher the attack accuracy, the less secure the audio captchas are. In our baseline experiments, we found that attack models could break audio captchas that had no background noise or medium background noise, with any number of spoken digits, with nearly 99% to 100% accuracy, whereas audio captchas with high background noise were relatively more secure, with an attack accuracy of 85%. Secondly, we propose that the concepts of adversarial example algorithms can be used to create a new kind of audio captcha that is more resilient to attacks. We found that even after retraining the models on the new adversarial audio data, the attack accuracy remained as low as 25% to 36%. Lastly, we explore the benefits of creating adversarial audio captchas through different algorithms such as the Basic Iterative Method (BIM) and DeepFool. We found that as long as the attacker has fewer than 45% of the samples from each kind of adversarial audio dataset, the defense will be successful at preventing attacks.
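The Basic Iterative Method (BIM) mentioned above is, in its generic form, repeated small FGSM steps projected back into an epsilon-ball. A sketch under that assumption follows; the toy model and "audio" tensor are placeholders, not the paper's captcha pipeline.

```python
# Sketch: BIM as iterated, projected FGSM on a stand-in audio classifier.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Linear(16000, 10))  # toy digit classifier
x = torch.rand(1, 16000)           # stand-in for one second of audio samples
y = torch.tensor([7])              # spoken-digit label
eps, alpha, steps = 0.03, 0.005, 10

x_adv = x.clone()
for _ in range(steps):
    x_adv.requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + alpha * x_adv.grad.sign()  # small FGSM step
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back to eps-ball
print(model(x_adv).argmax(dim=1))
```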
2020-09-04
Elkanishy, Abdelrahman, Badawy, Abdel-Hameed A., Furth, Paul M., Boucheron, Laura E., Michael, Christopher P..  2019.  Machine Learning Bluetooth Profile Operation Verification via Monitoring the Transmission Pattern. 2019 53rd Asilomar Conference on Signals, Systems, and Computers. :2144—2148.
Manufacturers often buy and/or license communication ICs from third-party suppliers. These communication ICs are then integrated into a complex computational system, resulting in a wide range of potential hardware-software security issues. This work proposes a compact supervisory circuit to classify the Bluetooth profile operation of a Bluetooth System-on-Chip (SoC) at low frequencies by monitoring the radio frequency (RF) output power of the Bluetooth SoC. The idea is to inexpensively manufacture an RF envelope detector to monitor the RF output power and a profile classification algorithm on a custom low-frequency integrated circuit in a low-cost legacy technology. When the supervisory circuit observes unexpected behavior, it can shut off power to the Bluetooth SoC. In this preliminary work, we prototype the supervisory circuit using off-the-shelf components to collect a sufficient data set to train 11 different Machine Learning models. We extract descriptive time-domain features from the envelope of the RF output signal. Then, we train the machine learning models to classify three different Bluetooth operation profiles: sensor, hands-free, and headset. Our results demonstrate 100% classification accuracy with low computational complexity.
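The feature-then-classify pipeline this abstract describes could be sketched as below; the envelope statistics, duty-cycle simulation, and profile labels are invented illustrations, not the authors' descriptor set or measurements.

```python
# Sketch: time-domain descriptors of an RF envelope window feeding a classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def envelope_features(env):
    """Simple descriptive statistics of one envelope window (toy feature set)."""
    return [env.mean(), env.std(), env.max(),
            np.mean(np.abs(np.diff(env))),   # burstiness proxy
            np.mean(env > env.mean())]       # duty-cycle proxy

rng = np.random.default_rng(1)
# Toy assumption: each profile transmits with a characteristic duty cycle.
profiles = {0: 0.2, 1: 0.5, 2: 0.8}  # sensor / hands-free / headset
X, y = [], []
for label, duty in profiles.items():
    for _ in range(100):
        env = (rng.random(1000) < duty) * rng.random(1000)  # toy bursty envelope
        X.append(envelope_features(env))
        y.append(label)

clf = RandomForestClassifier(random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```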
Khan, Aasher, Rehman, Suriya, Khan, Muhammad U.S, Ali, Mazhar.  2019.  Synonym-based Attack to Confuse Machine Learning Classifiers Using Black-box Setting. 2019 4th International Conference on Emerging Trends in Engineering, Sciences and Technology (ICEEST). :1—7.
Twitter, the most popular content sharing platform, is giving rise to automated accounts called "bots", and a majority of the users on Twitter are bots. Various machine learning (ML) algorithms have been designed to detect bots, but the vulnerability constraints of ML-based models are often left unaddressed. This paper exploits vulnerabilities of machine learning (ML) algorithms through a black-box attack: an adversarial text sequence causes deep learning (DL) classifiers for bot detection to misclassify. The literature shows that ML models are vulnerable to attacks. The aim of this paper is to compromise the accuracy of ML-based bot detection algorithms by replacing original words in tweets with their synonyms. Our results show a 7.2% decrease in accuracy for bot tweets, thereby classifying bot tweets as legitimate tweets.
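A toy version of the synonym-substitution idea, assuming NLTK's WordNet is available; the replacement policy (first differing synonym) and the sample tweet are invented, and the paper's black-box pipeline is more involved.

```python
# Sketch: swap each word for a WordNet synonym to probe a text classifier.
import nltk
nltk.download('wordnet', quiet=True)  # one-time corpus fetch
from nltk.corpus import wordnet

def synonym_variant(text):
    out = []
    for w in text.split():
        lemmas = [l.name().replace('_', ' ')
                  for s in wordnet.synsets(w) for l in s.lemmas()]
        # Take the first synonym that differs from the original word, if any.
        out.append(next((l for l in lemmas if l.lower() != w.lower()), w))
    return ' '.join(out)

tweet = "great offer click the link now"
print(synonym_variant(tweet))  # adversarial variant to feed the target model
```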
Usama, Muhammad, Qayyum, Adnan, Qadir, Junaid, Al-Fuqaha, Ala.  2019.  Black-box Adversarial Machine Learning Attack on Network Traffic Classification. 2019 15th International Wireless Communications Mobile Computing Conference (IWCMC). :84—89.
Deep machine learning techniques have shown promising results in network traffic classification; however, the robustness of these techniques under adversarial threats is still in question. Deep machine learning models have been found vulnerable to small, carefully crafted adversarial perturbations, posing a major question about the performance of deep machine learning techniques. In this paper, we propose a black-box adversarial attack on network traffic classification. The proposed attack successfully evades deep machine learning-based classifiers, which highlights the potential security threat of using deep machine learning techniques to realize autonomous networks.
Kanemura, Kota, Toyoda, Kentaroh, Ohtsuki, Tomoaki.  2019.  Identification of Darknet Markets’ Bitcoin Addresses by Voting Per-address Classification Results. 2019 IEEE International Conference on Blockchain and Cryptocurrency (ICBC). :154—158.
Bitcoin is a decentralized digital currency whose transactions are recorded in a common ledger, the so-called blockchain. Due to its anonymity and the lack of law enforcement, Bitcoin has been misused in darknet markets (DNMs), which deal in illegal products such as drugs and weapons. From a security forensics perspective, an approach is therefore needed to identify newly emerged darknet markets' transactions and addresses. In this paper, we thoroughly analyze Bitcoin transactions and addresses related to darknet markets and propose a novel identification method for darknet markets' addresses. To improve identification performance, we propose a voting-based method that decides the labels of multiple addresses controlled by the same user according to the majority label. Through computer simulation with more than 200K Bitcoin addresses, we show that our voting-based method outperforms the non-voting-based one in terms of precision, recall, and F1 score. We also found that DNMs' addresses pay higher fees than others, which significantly improves the classification.
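The voting step can be illustrated in a few lines of Python; the addresses, user clustering, and labels below are fabricated and show only the majority-vote mechanics.

```python
# Sketch: aggregate per-address labels into one per-user label by majority vote.
from collections import Counter

per_address_label = {"addr1": "DNM", "addr2": "other", "addr3": "DNM"}
user_of_address = {"addr1": "userA", "addr2": "userA", "addr3": "userA"}

votes = {}
for addr, label in per_address_label.items():
    votes.setdefault(user_of_address[addr], []).append(label)

# The majority label among a user's addresses decides all of them.
user_label = {u: Counter(ls).most_common(1)[0][0] for u, ls in votes.items()}
print(user_label)  # {'userA': 'DNM'}
```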
Laguduva, Vishalini, Islam, Sheikh Ariful, Aakur, Sathyanarayanan, Katkoori, Srinivas, Karam, Robert.  2019.  Machine Learning Based IoT Edge Node Security Attack and Countermeasures. 2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). :670—675.
Advances in technology have enabled tremendous progress in the development of a highly connected ecosystem of ubiquitous computing devices collectively called the Internet of Things (IoT). Ensuring the security of IoT devices is a high priority due to the sensitive nature of the collected data. Physically Unclonable Functions (PUFs) have emerged as critical hardware primitive for ensuring the security of IoT nodes. Malicious modeling of PUF architectures has proven to be difficult due to the inherently stochastic nature of PUF architectures. Extant approaches to malicious PUF modeling assume that a priori knowledge and physical access to the PUF architecture is available for malicious attack on the IoT node. However, many IoT networks make the underlying assumption that the PUF architecture is sufficiently tamper-proof, both physically and mathematically. In this work, we show that knowledge of the underlying PUF structure is not necessary to clone a PUF. We present a novel non-invasive, architecture independent, machine learning attack for strong PUF designs with a cloning accuracy of 93.5% and improvements of up to 48.31% over an alternative, two-stage brute force attack model. We also propose a machine-learning based countermeasure, discriminator, which can distinguish cloned PUF devices and authentic PUFs with an average accuracy of 96.01%. The proposed discriminator can be used for rapidly authenticating millions of IoT nodes remotely from the cloud server.
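As a concrete instance of this attack class (though not the paper's architecture-independent method), a classic modeling attack on a simulated arbiter PUF learns the linear-additive delay model from challenge parity features; the simulation below assumes NumPy and scikit-learn.

```python
# Sketch: logistic-regression modeling attack on a simulated arbiter PUF.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_stages, n_crps = 64, 5000
w = rng.normal(size=n_stages + 1)  # hidden delay parameters of the toy PUF

def parity_features(challenges):
    # phi_i = product of (1 - 2*c_j) for j >= i, plus a constant bias term.
    signs = 1 - 2 * challenges                        # {0,1} -> {+1,-1}
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(challenges), 1))])

C = rng.integers(0, 2, size=(n_crps, n_stages))       # random challenges
X = parity_features(C)
y = (X @ w > 0).astype(int)                           # simulated responses

clf = LogisticRegression(max_iter=1000).fit(X[:4000], y[:4000])
print("clone accuracy on held-out CRPs:", clf.score(X[4000:], y[4000:]))
```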
Sutton, Sara, Bond, Benjamin, Tahiri, Sementa, Rrushi, Julian.  2019.  Countering Malware Via Decoy Processes with Improved Resource Utilization Consistency. 2019 First IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :110—119.
The concept of a decoy process is a new development of defensive deception beyond traditional honeypots. Decoy processes can be exceptionally effective in detecting malware, directly upon contact or by redirecting malware to decoy I/O. A key requirement is that they resemble their real counterparts very closely to withstand adversarial probes by threat actors. To be usable, decoy processes need to consume only a small fraction of the resources consumed by their real counterparts. Our contribution in this paper is twofold. We attack the resource utilization consistency of decoy processes provided by a neural network with a heatmap training mechanism, which we find to be insufficiently trained. We then devise machine learning over control flow graphs that improves the heatmap training mechanism. A neural network retrained by our work shows higher accuracy and defeats our attacks without a significant increase in its own resource utilization.
2020-08-28
Hasanin, Tawfiq, Khoshgoftaar, Taghi M., Leevy, Joffrey L..  2019.  A Comparison of Performance Metrics with Severely Imbalanced Network Security Big Data. 2019 IEEE 20th International Conference on Information Reuse and Integration for Data Science (IRI). :83—88.
Severe class imbalance between the majority and minority classes in large datasets can prejudice Machine Learning classifiers toward the majority class. Our work uniquely consolidates two case studies, each utilizing three learners implemented within an Apache Spark framework, six sampling methods, and five sampling distribution ratios to analyze the effect of severe class imbalance on big data analytics. We use three performance metrics to evaluate this study: Area Under the Receiver Operating Characteristic Curve, Area Under the Precision-Recall Curve, and Geometric Mean. In the first case study, models were trained on one dataset (POST) and tested on another (SlowlorisBig). In the second case study, the training and testing dataset roles were switched. Our comparison of performance metrics shows that Area Under the Precision-Recall Curve and Geometric Mean are sensitive to changes in the sampling distribution ratio, whereas Area Under the Receiver Operating Characteristic Curve is relatively unaffected. In addition, we demonstrate that when comparing sampling methods, borderline-SMOTE2 outperforms the other methods in the first case study, and Random Undersampling is the top performer in the second case study.
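The three metrics compared in the study can be computed as follows with scikit-learn; the labels and scores are synthetic stand-ins, and average_precision_score is used as the usual summary of the area under the precision-recall curve.

```python
# Sketch: AUC-ROC, AUC-PR, and Geometric Mean on a severely imbalanced sample.
import numpy as np
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             confusion_matrix)

rng = np.random.default_rng(0)
y_true = np.concatenate([np.ones(50), np.zeros(5000)])   # 1% positive class
y_score = np.concatenate([rng.beta(5, 2, 50), rng.beta(2, 5, 5000)])
y_pred = (y_score > 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
gmean = np.sqrt(tp / (tp + fn) * tn / (tn + fp))  # sqrt(recall * specificity)
print("AUC-ROC:", roc_auc_score(y_true, y_score))
print("AUC-PR :", average_precision_score(y_true, y_score))
print("G-Mean :", gmean)
```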
Huang, Angus F.M., Chi-Wei, Yang, Tai, Hsiao-Chi, Chuan, Yang, Huang, Jay J.C., Liao, Yu-Han.  2019.  Suspicious Network Event Recognition Using Modified Stacking Ensemble Machine Learning. 2019 IEEE International Conference on Big Data (Big Data). :5873—5880.
This study aims to detect genuine suspicious events and false alarms within a dataset of network traffic alerts. The rapid development of cloud computing and artificial intelligence-oriented automatic services has enabled a large amount of data and information to be transmitted among network nodes. However, the number of cyber-threats, cyberattacks, and network intrusions has increased in various domains of network environments. Based on the fields of data science and machine learning, this paper proposes a series of solutions involving data preprocessing, exploratory data analysis, new feature creation, feature selection, ensemble learning, model construction, and verification to identify suspicious network events. This paper proposes a modified form of stacking ensemble machine learning which includes AdaBoost, Neural Networks, Random Forest, LightGBM, and Extremely Randomised Trees (Extra Trees) to realise a high-performance classification. A suspicious network event recognition dataset for a security operations centre, which uses real network log observations from the 2019 IEEE BigData Cup Challenge, is used as the experimental dataset. This paper investigates the possibility of integrating big-data analytics, machine learning, and data science to improve intelligent cybersecurity.
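A compact stand-in for such a stacking ensemble, built with scikit-learn on synthetic data; GradientBoosting substitutes for LightGBM to keep the sketch dependency-free, and the logistic-regression meta-learner is an assumption, not the paper's configuration.

```python
# Sketch: stacking ensemble over the base learners named in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("ada", AdaBoostClassifier(random_state=0)),
                ("mlp", MLPClassifier(max_iter=500, random_state=0)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0)),  # LightGBM stand-in
                ("et", ExtraTreesClassifier(random_state=0))],
    final_estimator=LogisticRegression())
print("held-out accuracy:", stack.fit(Xtr, ytr).score(Xte, yte))
```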
Mulinka, Pavol, Casas, Pedro, Vanerio, Juan.  2019.  Continuous and Adaptive Learning over Big Streaming Data for Network Security. 2019 IEEE 8th International Conference on Cloud Networking (CloudNet). :1—4.
Continuous and adaptive learning is an effective learning approach when dealing with highly dynamic and changing scenarios, where concept drift often happens. In a continuous, stream, or adaptive learning setup, new measurements arrive continuously and there are no boundaries for learning, meaning that the learning model has to decide how and when to (re)learn from these new data constantly. We address the problem of adaptive and continual learning for network security, building dynamic models to detect network attacks in real network traffic. The combination of fast and big network measurement data with the re-training paradigm of adaptive learning imposes complex challenges in terms of data processing speed, which we tackle by relying on big data platforms for parallel stream processing. We build and benchmark different adaptive learning models on top of a novel big data analytics platform for network traffic monitoring and analysis tasks, and show that high speed-up computations (as high as 6×) can be achieved by parallelizing off-the-shelf stream learning approaches.
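The (re)learning loop described above can be illustrated with scikit-learn's partial_fit, standing in for the off-the-shelf stream learners benchmarked; the drift simulation and batch sizes are invented.

```python
# Sketch: incremental learning over arriving traffic batches with concept drift.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # 0 = normal traffic, 1 = attack

for t in range(20):                          # 20 arriving mini-batches
    drift = t * 0.1                          # attack features shift over time
    X = np.vstack([rng.normal(0, 1, size=(100, 5)),
                   rng.normal(2 + drift, 1, size=(10, 5))])
    y = np.array([0] * 100 + [1] * 10)
    model.partial_fit(X, y, classes=classes)  # (re)learn from the new batch

# The updated model should still recognize the drifted attack distribution.
print(model.predict(rng.normal(4.0, 1, size=(3, 5))))
```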
Parafita, Álvaro, Vitrià, Jordi.  2019.  Explaining Visual Models by Causal Attribution. 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). :4167—4175.

Model explanations based on pure observational data cannot compute the effects of features reliably, due to their inability to estimate how each factor alteration could affect the rest. We argue that explanations should be based on the causal model of the data and the derived intervened causal models, that represent the data distribution subject to interventions. With these models, we can compute counterfactuals, new samples that will inform us how the model reacts to feature changes on our input. We propose a novel explanation methodology based on Causal Counterfactuals and identify the limitations of current Image Generative Models in their application to counterfactual creation.

Perry, Lior, Shapira, Bracha, Puzis, Rami.  2019.  NO-DOUBT: Attack Attribution Based On Threat Intelligence Reports. 2019 IEEE International Conference on Intelligence and Security Informatics (ISI). :80—85.

The task of attack attribution, i.e., identifying the entity responsible for an attack, is complicated and usually requires the involvement of an experienced security expert. Prior attempts to automate attack attribution apply various machine learning techniques on features extracted from the malware's code and behavior in order to identify other similar malware whose authors are known. However, the same malware can be reused by multiple actors, and the actor who performed an attack using a malware might differ from the malware's author. Moreover, information collected during an incident may contain many clues about the identity of the attacker in addition to the malware used. In this paper, we propose a method of attack attribution based on textual analysis of threat intelligence reports, using state-of-the-art algorithms and models from the fields of machine learning and natural language processing (NLP). We have developed a new text representation algorithm which captures the context of the words and requires minimal feature engineering. Our approach relies on vector space representation of incident reports derived from a small collection of labeled reports and a large corpus of general security literature. Both datasets have been made available to the research community. Experimental results show that the proposed representation can attribute attacks more accurately than the baselines' representations. In addition, we show how the proposed approach can be used to identify novel previously unseen threat actors and identify similarities between known threat actors.
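Framing attribution as text classification might look like the sketch below, where a plain TF-IDF representation stands in for the paper's custom context-aware representation; the reports and actor labels are invented.

```python
# Sketch: attribute incident reports to threat actors via text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = ["spearphishing lure targeting energy sector with custom loader",
           "wiper deployed via stolen credentials against banks",
           "energy grid intrusion using the same loader infrastructure"]
actors = ["APT-X", "APT-Y", "APT-X"]  # hypothetical labels

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reports, actors)
print(model.predict(["loader infrastructure reused in new energy attack"]))
```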

Traylor, Terry, Straub, Jeremy, Gurmeet, Snell, Nicholas.  2019.  Classifying Fake News Articles Using Natural Language Processing to Identify In-Article Attribution as a Supervised Learning Estimator. 2019 IEEE 13th International Conference on Semantic Computing (ICSC). :445—449.

Intentionally deceptive content presented under the guise of legitimate journalism is a worldwide information accuracy and integrity problem that affects opinion forming, decision making, and voting patterns. Most so-called `fake news' is initially distributed over social media conduits like Facebook and Twitter and later finds its way onto mainstream media platforms such as traditional television and radio news. The fake news stories that are initially seeded over social media platforms share key linguistic characteristics such as making excessive use of unsubstantiated hyperbole and non-attributed quoted content. In this paper, the results of a fake news identification study that documents the performance of a fake news classifier are presented. The Textblob, Natural Language, and SciPy Toolkits were used to develop a novel fake news detector that uses quoted attribution in a Bayesian machine learning system as a key feature to estimate the likelihood that a news article is fake. The resulting process achieves 63.333% precision in assessing the likelihood that an article with quotes is fake. This process is called influence mining and this novel technique is presented as a method that can be used to enable fake news and even propaganda detection. In this paper, the research process, technical analysis, technical linguistics work, and classifier performance and results are presented. The paper concludes with a discussion of how the current system will evolve into an influence mining system.
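A toy rendering of the quoted-attribution idea feeding a Bayesian classifier; the regexes, articles, and labels are fabricated illustrations, not the study's TextBlob/NLTK pipeline.

```python
# Sketch: count quoted vs. attributed content and classify with naive Bayes.
import re
from sklearn.naive_bayes import GaussianNB

def attribution_features(text):
    quotes = len(re.findall(r'"[^"]+"', text))               # quoted spans
    attributed = len(re.findall(r'\b(said|according to|stated)\b', text))
    return [quotes, attributed, quotes - attributed]          # unattributed gap

articles = ['"Huge win," said the mayor, according to officials.',
            '"They will never tell you this secret!" Shocking truth inside.']
labels = [0, 1]  # 0 = legitimate, 1 = fake (toy labels)

clf = GaussianNB().fit([attribution_features(a) for a in articles], labels)
print(clf.predict([attribution_features('"Unbelievable!" No sources cited.')]))
```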

2020-08-24
Raghavan, Pradheepan, Gayar, Neamat El.  2019.  Fraud Detection using Machine Learning and Deep Learning. 2019 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE). :334–339.
Frauds are known to be dynamic and to have no patterns, hence they are not easy to identify. Fraudsters use recent technological advancements to their advantage and somehow bypass security checks, leading to the loss of millions of dollars. Analyzing and detecting unusual activities using data mining techniques is one way of tracing fraudulent transactions. This paper aims to benchmark multiple machine learning methods, such as k-nearest neighbor (KNN), random forest, and support vector machines (SVM), against deep learning methods such as autoencoders, convolutional neural networks (CNN), restricted Boltzmann machines (RBM), and deep belief networks (DBN). The datasets used are the European (EU), Australian, and German datasets. The Area Under the ROC Curve (AUC), Matthews Correlation Coefficient (MCC), and cost of failure are the three evaluation metrics used.
Thirumaran, M., Moshika, A., Padmanaban, R..  2019.  Hybrid Model for Web Application Vulnerability Assessment Using Decision Tree and Bayesian Belief Network. 2019 IEEE International Conference on System, Computation, Automation and Networking (ICSCAN). :1–7.
In the current landscape, most business processes run through web applications. This helps enterprises grow their business efficiently and build good consumer relationships, but the main problem is that they fail to provide a vulnerability-free environment. To overcome this issue in web applications, vulnerability assessment should be performed periodically. Many earlier vulnerability assessment methodologies are not very proactive, so machine learning is needed to provide a combined solution for determining vulnerability occurrence and the percentage of vulnerability occurring in logical web pages. We use a Decision Tree and a Bayesian Belief Network (BBN) as a collective solution to determine whether vulnerabilities occur in web applications and the percentage of vulnerability occurring on different logical web pages.
Starke, Allen, Nie, Zixiang, Hodges, Morgan, Baker, Corey, McNair, Janise.  2019.  Denial of Service Detection Mitigation Scheme using Responsive Autonomic Virtual Networks (RAvN). MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :1–6.
In this paper we propose a responsive, autonomic, and data-driven adaptive virtual networking framework (RAvN) that integrates the adaptive reconfigurable features of a popular SDN platform called Open Network Operating System (ONOS), the network performance statistics provided by traffic monitoring tools such as T-shark or sFlow-RT, and the analytics and decision-making capabilities provided by new and current machine learning techniques to detect and mitigate anomalous behavior. For this paper we focus on the development of novel detection schemes using a centroid-based clustering technique and the intragroup variance of data features within network traffic (C. Intra), with a multivariate Gaussian distribution model fitted to the constant changes in the IP addresses of the network, to accurately assist in the detection of low-rate and high-rate denial of service (DoS) attacks. We briefly discuss our ideas on the development of the decision-making and execution component using the concept of generating adaptive policy updates (i.e., anomaly mitigation solutions) on the fly to the ONOS SDN controller for updating network configurations and flows. In addition, we analyze the anomaly detection schemes used for detecting low-rate and high-rate DoS attacks against a commonly used unsupervised machine learning technique, K-means. The proposed schemes outperformed K-means significantly: the multivariate clustering method and the intragroup variance method recorded 80.54% and 96.13% accuracy respectively, while K-means recorded 72.38% accuracy.
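The multivariate-Gaussian component of such a detector can be sketched as fitting a Gaussian to normal-traffic features and flagging low-density points; the features, quantile threshold, and "flood" sample below are illustrative assumptions.

```python
# Sketch: multivariate Gaussian density model for DoS anomaly flagging.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
# Toy normal-traffic features: [pkts_per_sec, new_ip_ratio, mean_flow_dur]
normal = rng.normal([50, 0.2, 10], [5, 0.05, 2], size=(1000, 3))
mu, cov = normal.mean(axis=0), np.cov(normal, rowvar=False)
density = multivariate_normal(mean=mu, cov=cov)

threshold = np.quantile(density.pdf(normal), 0.01)  # 1% lowest-density cutoff
flood = np.array([[500, 0.9, 1]])                   # hypothetical DoS sample
print("anomalous:", bool(density.pdf(flood) < threshold))
```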
Fargo, Farah, Franza, Olivier, Tunc, Cihan, Hariri, Salim.  2019.  Autonomic Resource Management for Power, Performance, and Security in Cloud Environment. 2019 IEEE/ACS 16th International Conference on Computer Systems and Applications (AICCSA). :1–4.
High performance computing (HPC) is widely used for large-scale simulation, design, and analysis of critical problems, increasingly through cloud computing systems, because the cloud provides ubiquitous, on-demand computing capabilities with a large variety of hardware configurations, including the GPUs and FPGAs heavily used for high performance computing. However, it is well known that inefficient management of such systems results in excessive power consumption, affecting budgets, creating cooling challenges, and reducing reliability due to overheating and hotspots. Furthermore, considering the latest trends in attack scenarios and cryptocurrency-based intrusions, security has become a major problem for high performance computing. To address both challenges, in this paper we present an autonomic management methodology for both security and power/performance. Our proposed approach first builds knowledge of the environment in terms of power consumption and the deployment of security tools. Next, it provisions virtual resources so that power consumption can be reduced while maintaining the required performance, and deploys the security tools based on system behavior. Using this approach, we can efficiently utilize a wide range of secure resources in HPC systems, cloud computing systems, servers, embedded systems, etc.
2020-08-17
Hu, Jianxing, Huo, Dongdong, Wang, Meilin, Wang, Yazhe, Zhang, Yan, Li, Yu.  2019.  A Probability Prediction Based Mutable Control-Flow Attestation Scheme on Embedded Platforms. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :530–537.
Control-flow attacks pose powerful threats to software integrity, and remote attestation of control flow is a crucial security service for ensuring software integrity on embedded platforms. Fine-grained remote control-flow attestation with an execution-profiling Control-Flow Graph (CFG) is applied to defend against control-flow attacks. It is a safe scheme, but it may hurt runtime efficiency. In fact, we find that only the vulnerable parts of a program need to be attested at the costly fine-grained level to ensure security, while the remaining normal parts need only a lightweight coarse-grained check to reduce overhead. We propose the Mutable Granularity Control-Flow Attestation (MGC-FA) scheme, which is based on a probabilistic model, to distinguish between the vulnerable and normal parts of a program and combine fine-grained and coarse-grained control-flow attestation schemes. MGC-FA employs the execution-profiling CFG to apply the remote control-flow attestation scheme on embedded devices. MGC-FA is implemented on a Raspberry Pi with ARM TrustZone, and the experimental results show its effectiveness in balancing runtime efficiency and control-flow security.
2020-08-13
Zhang, Yueqian, Kantarci, Burak.  2019.  Invited Paper: AI-Based Security Design of Mobile Crowdsensing Systems: Review, Challenges and Case Studies. 2019 IEEE International Conference on Service-Oriented System Engineering (SOSE). :17—1709.
Mobile crowdsensing (MCS) is a distributed sensing paradigm that uses a variety of built-in sensors in smart mobile devices to enable ubiquitous acquisition of sensory data from surroundings. However, the non-dedicated nature of MCS introduces vulnerabilities that malicious participants can exploit to compromise the availability of the MCS components, particularly the servers and participants' devices. In this paper, we focus on Denial of Service attacks in MCS where malicious participants submit illegitimate task requests to the MCS platform to keep MCS servers busy while having sensing devices expend energy needlessly. After reviewing Artificial Intelligence-based security solutions for MCS systems, we focus on a typical location-based and energy-oriented DoS attack, and present a security solution that applies ensemble techniques in machine learning to identify illegitimate tasks and prevent personal devices from pointless energy consumption so as to improve the availability of the whole system. Through simulations, we show that ensemble techniques are capable of identifying illegitimate and legitimate tasks, while gradient boosting appears to be a preferable solution, with an area under the precision-recall curve higher than 0.88. We also investigate the impact of environmental settings on the detection performance so as to provide a clearer understanding of the model. Our performance results show that MCS task legitimacy decisions with high F-scores are possible for both illegitimate and legitimate tasks.
Sadeghi, Koosha, Banerjee, Ayan, Gupta, Sandeep K. S..  2019.  An Analytical Framework for Security-Tuning of Artificial Intelligence Applications Under Attack. 2019 IEEE International Conference On Artificial Intelligence Testing (AITest). :111—118.
Machine Learning (ML) algorithms, as the core technology in Artificial Intelligence (AI) applications such as self-driving vehicles, make important decisions by performing a variety of data classification or prediction tasks. Attacks on data or algorithms in AI applications can lead to misclassification or misprediction, which can fail the applications. For each dataset separately, the parameters of ML algorithms should be tuned to reach a desirable classification or prediction accuracy. Typically, ML experts tune the parameters empirically, which can be time consuming and does not guarantee the optimal result. To this end, some research suggests an analytical approach to tune the ML parameters for maximum accuracy. However, none of these works consider the ML performance under attack in their tuning process. This paper proposes an analytical framework for tuning the ML parameters to be secure against attacks, while keeping accuracy high. The framework finds the optimal set of parameters by defining a novel objective function, which takes into account the test results of both ML accuracy and its security against attacks. For validating the framework, an AI application is implemented to recognize whether a subject's eyes are open or closed, by applying the k-Nearest Neighbors (kNN) algorithm on her Electroencephalogram (EEG) signals. In this application, the number of neighbors (k) and the distance metric type, as the two main parameters of kNN, are chosen for tuning. The input data perturbation attack, as one of the most common attacks on ML algorithms, is used for testing the security of the application. An exhaustive search approach is used to solve the optimization problem. The experiment results show that k = 43 with the cosine distance metric is the optimal configuration of kNN for the EEG dataset, leading to 83.75% classification accuracy and reducing the attack success rate to 5.21%.
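The tuning idea generalizes to a small exhaustive search over k and the distance metric against a combined clean-accuracy/robustness objective, as in this sketch; the data, Gaussian "perturbation attack", and 50/50 weighting are invented, not the paper's EEG setup or objective function.

```python
# Sketch: exhaustive search for a security-aware kNN configuration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)
X_adv = Xte + rng.normal(0, 0.5, Xte.shape)  # toy input-perturbation attack

best = None
for k in [1, 5, 15, 43]:
    for metric in ["euclidean", "cosine"]:
        clf = KNeighborsClassifier(n_neighbors=k, metric=metric).fit(Xtr, ytr)
        acc, robust = clf.score(Xte, yte), clf.score(X_adv, yte)
        objective = 0.5 * acc + 0.5 * robust  # toy combined objective
        if best is None or objective > best[0]:
            best = (objective, k, metric, acc, robust)
print("best (objective, k, metric, clean acc, robust acc):", best)
```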
2020-08-10
Wasi, Sarwar, Shams, Sarmad, Nasim, Shahzad, Shafiq, Arham.  2019.  Intrusion Detection Using Deep Learning and Statistical Data Analysis. 2019 4th International Conference on Emerging Trends in Engineering, Sciences and Technology (ICEEST). :1–5.
Innovation and creativity have played an important role in the development of every field of life, but they have also created several problems. Intrusion detection is one such problem, which has become more difficult with the advancement of computer networks; multiple researchers using multiple techniques have tried to solve this crucial issue, but network security remains a challenge. In our research, we detect intrusions using a deep learning algorithm combined with statistical analysis of the KDD Cup 99 dataset. First, we apply statistical analysis to the given data set to generate a simplified form of the data, so that a less complex binary classification model based on an artificial neural network can be applied for data classification. Our system decreases the complexity of the system and improves response time.