Biblio

Found 600 results

Filters: Keyword is machine learning
2020-12-02
Abeysekara, P., Dong, H., Qin, A. K..  2019.  Machine Learning-Driven Trust Prediction for MEC-Based IoT Services. 2019 IEEE International Conference on Web Services (ICWS). :188—192.

We propose a distributed machine-learning architecture to predict the trustworthiness of sensor services in Mobile Edge Computing (MEC) based Internet of Things (IoT) services, which aligns well with the goals of MEC and the requirements of modern IoT systems. The proposed architecture models the training of a distributed trust prediction model over a topology of MEC environments as a Network Lasso problem, which allows simultaneous clustering and optimization on large-scale networked graphs. We then solve it using the Alternating Direction Method of Multipliers (ADMM) in a way that makes it suitable for MEC-based IoT systems. We present analytical and simulation results to show the validity and efficiency of the proposed solution.
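For context, the standard Network Lasso objective that this kind of formulation builds on couples a per-node loss with a sum-of-norms penalty over the edges of the topology; the details of the per-node loss f_i used in the paper are not spelled out here, so the statement below is only the generic form:

```latex
\min_{x_1,\dots,x_m} \; \sum_{i=1}^{m} f_i(x_i) \;+\; \lambda \sum_{(j,k)\in\mathcal{E}} w_{jk}\,\lVert x_j - x_k \rVert_2
```

The edge penalty encourages neighbouring MEC nodes to converge to identical model parameters (the clustering effect mentioned in the abstract), while ADMM splits the objective into per-node and per-edge updates that can be carried out locally at each MEC environment.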

2020-12-01
Yang, R., Ouyang, X., Chen, Y., Townend, P., Xu, J..  2018.  Intelligent Resource Scheduling at Scale: A Machine Learning Perspective. 2018 IEEE Symposium on Service-Oriented System Engineering (SOSE). :132—141.

Resource scheduling in a computing system addresses the problem of packing tasks with multi-dimensional resource requirements and non-functional constraints. The heterogeneity of workload and server characteristics exhibited in Cloud-scale or Internet-scale systems adds further complexity and new challenges to the problem. Compared with existing solutions based on ad-hoc heuristics, Machine Learning (ML) has the potential to further improve the efficiency of resource management in large-scale systems. In this paper we describe and discuss how ML could be used to understand both workloads and environments automatically, and to help cope with scheduling-related challenges such as consolidating co-located workloads, handling resource requests, guaranteeing applications' QoS, and mitigating tailed stragglers. We introduce a generalized ML-based solution to large-scale resource scheduling and demonstrate its effectiveness through a case study that deals with performance-centric node classification and straggler mitigation. We believe that an ML-based method will help to achieve architectural optimization and efficiency improvement.

Goel, A., Agarwal, A., Vatsa, M., Singh, R., Ratha, N..  2019.  DeepRing: Protecting Deep Neural Network With Blockchain. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :2821—2828.

Several computer vision applications such as object detection and face recognition have started to completely rely on deep learning based architectures. These architectures, when paired with appropriate loss functions and optimizers, produce state-of-the-art results in a myriad of problems. On the other hand, with the advent of "blockchain", the cybersecurity industry has developed a new sense of trust which was earlier missing from both the technical and commercial perspectives. Employment of cryptographic hashes as well as symmetric/asymmetric encryption and decryption algorithms ensures security without any human intervention (i.e., without a centralized authority). In this research, we present the synergy between the best of both these worlds. We first propose a model which uses the learned parameters of a typical deep neural network and is secured from external adversaries by cryptography and blockchain technology. As the second contribution of the proposed research, a new parameter tampering attack is proposed to properly justify the role of blockchain in machine learning.
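The abstract does not give the DeepRing construction itself, but the tamper-evidence idea it relies on can be illustrated with a minimal sketch: hash each layer's parameters and chain the hashes, so that modifying any parameter invalidates every later link. The toy model, layer names and use of SHA-256 below are assumptions for illustration, not the authors' design.

```python
import hashlib
import numpy as np

def layer_digest(weights: np.ndarray) -> str:
    """SHA-256 digest of a layer's raw parameter bytes."""
    return hashlib.sha256(weights.tobytes()).hexdigest()

def build_chain(layers: dict) -> list:
    """Chain layer digests so that altering any layer breaks all later links.
    `layers` maps layer name -> weight array (toy stand-in for a real DNN)."""
    chain, prev = [], "0" * 64  # genesis value
    for name, w in layers.items():
        block_hash = hashlib.sha256((prev + layer_digest(w)).encode()).hexdigest()
        chain.append({"layer": name, "hash": block_hash})
        prev = block_hash
    return chain

def verify(layers: dict, chain: list) -> bool:
    return [b["hash"] for b in build_chain(layers)] == [b["hash"] for b in chain]

# Toy model: two "layers" of random parameters
rng = np.random.default_rng(0)
model = {"conv1": rng.normal(size=(3, 3, 16)), "fc": rng.normal(size=(128, 10))}
ledger = build_chain(model)

model["fc"][0, 0] += 1e-3          # simulate a parameter-tampering attack
print(verify(model, ledger))       # False: the chain exposes the modification
```

A real blockchain adds distribution and consensus validation of such a ledger; the sketch only shows the tamper-evidence property that makes parameter tampering detectable.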

Usama, M., Asim, M., Latif, S., Qadir, J., Ala-Al-Fuqaha.  2019.  Generative Adversarial Networks For Launching and Thwarting Adversarial Attacks on Network Intrusion Detection Systems. 2019 15th International Wireless Communications Mobile Computing Conference (IWCMC). :78—83.

Intrusion detection systems (IDSs) are an essential cog of the network security suite that can defend the network from malicious intrusions and anomalous traffic. Many machine learning (ML)-based IDSs have been proposed in the literature for the detection of malicious network traffic. However, recent works have shown that ML models are vulnerable to adversarial perturbations through which an adversary can cause IDSs to malfunction by introducing a small impracticable perturbation in the network traffic. In this paper, we propose an adversarial ML attack using generative adversarial networks (GANs) that can successfully evade an ML-based IDS. We also show that GANs can be used to inoculate the IDS and make it more robust to adversarial perturbations.
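As a rough illustration of the GAN setup such an attack implies (not the authors' architecture), the sketch below perturbs malicious flow features so that a surrogate discriminator, standing in for the IDS, classifies them as benign; the feature and noise dimensions, network sizes, placeholder batches and training details are all assumptions.

```python
import torch
import torch.nn as nn

FEATURES, NOISE = 41, 16   # assumed flow-feature and noise dimensions

class Generator(nn.Module):
    """Maps (malicious features, noise) to a perturbed feature vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURES + NOISE, 64), nn.ReLU(),
            nn.Linear(64, FEATURES))
    def forward(self, x, z):
        return x + self.net(torch.cat([x, z], dim=1))  # additive perturbation

class Discriminator(nn.Module):
    """Stands in for the surrogate IDS the generator tries to fool."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURES, 64), nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

malicious = torch.rand(32, FEATURES)   # placeholder batch of malicious flows
benign = torch.rand(32, FEATURES)      # placeholder batch of benign flows

for _ in range(100):
    # Discriminator step: benign -> 1, adversarially perturbed -> 0
    z = torch.randn(32, NOISE)
    fake = G(malicious, z).detach()
    loss_d = bce(D(benign), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: make perturbed malicious flows look benign
    z = torch.randn(32, NOISE)
    loss_g = bce(D(G(malicious, z)), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```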

Abdulhammed, R., Faezipour, M., Musafer, H., Abuzneid, A..  2019.  Efficient Network Intrusion Detection Using PCA-Based Dimensionality Reduction of Features. 2019 International Symposium on Networks, Computers and Communications (ISNCC). :1—6.

Designing a machine learning based network intrusion detection system (IDS) with high-dimensional features can lead to prolonged classification processes, whereas low-dimensional features can shorten them. Moreover, classification of network traffic with imbalanced class distributions has posed a significant drawback on the performance attainable by most well-known classifiers. In the presence of imbalanced data, the usual metrics may fail to provide adequate information about the performance of the classifier. This study first uses Principal Component Analysis (PCA) as a feature dimensionality reduction approach. The resulting low-dimensional features are then used to build various classifiers such as Random Forest (RF), Bayesian Network, Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA) for designing an IDS. The experimental findings with low-dimensional features in binary and multi-class classification show better performance in terms of Detection Rate (DR), F-Measure, False Alarm Rate (FAR), and Accuracy. Furthermore, we apply a Multi-Class Combined performance metric (Combined Mc) that incorporates FAR, DR, Accuracy, and class distribution parameters. In addition, we developed a uniform-distribution-based balancing approach to handle the imbalanced distribution of the minority class instances in the CICIDS2017 network intrusion dataset. We were able to reduce the CICIDS2017 dataset's feature dimensions from 81 to 10 using PCA, while maintaining a high accuracy of 99.6% in multi-class and binary classification.
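A minimal scikit-learn sketch of the PCA-then-classify pipeline described above follows; the synthetic data stands in for CICIDS2017, the 10-component setting follows the abstract, and the classifier settings are illustrative rather than the authors' configuration (the Bayesian Network classifier is omitted because it needs a separate library).

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 81 CICIDS2017 flow features, with skewed classes
X, y = make_classification(n_samples=5000, n_features=81, n_informative=20,
                           n_classes=5, weights=[0.7, 0.1, 0.1, 0.05, 0.05],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    stratify=y, random_state=0)

for clf in (RandomForestClassifier(n_estimators=100, random_state=0),
            LinearDiscriminantAnalysis(),
            QuadraticDiscriminantAnalysis()):
    # Scale, reduce 81 -> 10 dimensions with PCA, then classify
    model = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    model.fit(X_train, y_train)
    print(type(clf).__name__)
    print(classification_report(y_test, model.predict(X_test)))
```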

2020-11-30
Stokes, J. W., Agrawal, R., McDonald, G., Hausknecht, M..  2019.  ScriptNet: Neural Static Analysis for Malicious JavaScript Detection. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :1–8.
Malicious scripts are an important infection vector for computer users. For internet-scale processing, static analysis offers substantial computing efficiencies. We propose the ScriptNet system for neural malicious JavaScript detection, which is based on static analysis. We also propose a novel deep learning model, Pre-Informant Learning (PIL), which processes JavaScript files as byte sequences. Lower layers capture the sequential nature of these byte sequences while higher layers classify the resulting embedding as malicious or benign. Unlike previously proposed solutions, our model variants are trained in an end-to-end fashion, allowing discriminative training even for the sequential processing layers. Evaluating this model on a large corpus of 212,408 JavaScript files indicates that the best performing PIL model offers a 98.10% true positive rate (TPR) for the first 60K byte subsequences and 81.66% for the full-length files, at a false positive rate (FPR) of 0.50%. Both models significantly outperform several baseline models. The best performing PIL model can successfully detect 92.02% of unknown malware samples in a hindsight experiment where the true labels of the malicious JavaScript files were not known when the model was trained.
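The PIL model itself is not reproduced here, but a generic byte-sequence classifier of the kind the abstract describes (a byte embedding, a sequential layer, and a classification head trained end-to-end) can be sketched in PyTorch as follows; the layer sizes, sequence length and random batch are assumptions.

```python
import torch
import torch.nn as nn

class ByteSequenceClassifier(nn.Module):
    """Embeds raw bytes, summarizes the sequence with an LSTM,
    and classifies the final state as malicious or benign."""
    def __init__(self, embed_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(256, embed_dim)        # one entry per byte value
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, bytes_batch):                      # (batch, seq_len), values in [0, 255]
        x = self.embed(bytes_batch)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1]).squeeze(-1)              # malicious/benign logits

model = ByteSequenceClassifier()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder batch: 8 scripts truncated/padded to 1024 bytes, with labels
scripts = torch.randint(0, 256, (8, 1024))
labels = torch.randint(0, 2, (8,)).float()

loss = loss_fn(model(scripts), labels)   # one end-to-end training step
loss.backward()
opt.step()
```
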
2020-11-23
Zhu, L., Dong, H., Shen, M., Gai, K..  2019.  An Incentive Mechanism Using Shapley Value for Blockchain-Based Medical Data Sharing. 2019 IEEE 5th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). :113–118.
With the development of big data and machine learning techniques, medical data sharing for the purpose of disease diagnosis has received considerable attention. Blockchain, as an emerging technology, has been widely used to resolve the efficiency and security issues in medical data sharing. However, the existing studies on blockchain-based medical data sharing have rarely considered a reasonable incentive mechanism. In this paper, we propose a cooperation model where medical data is shared via blockchain. We derive the topological relationships among the participants, consisting of data owners, miners and third parties, and gradually develop the computational process of Shapley value revenue distribution. Specifically, we explore the revenue distribution under different blockchain consensuses. Finally, we demonstrate the incentive effect and rationality of the proposed solution by analyzing the revenue distribution.
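For readers unfamiliar with Shapley value revenue distribution, the sketch below computes exact Shapley values for a toy coalition of a data owner, a miner and a third party; the characteristic (revenue) function is invented for illustration and is not taken from the paper.

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values by averaging marginal contributions
    over all orderings (fine for a handful of participants)."""
    shapley = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            shapley[p] += value(frozenset(coalition)) - before
    return {p: v / len(orderings) for p, v in shapley.items()}

# Toy characteristic function: revenue generated by each possible coalition of a
# data owner, a miner and a third party (the numbers are illustrative only).
revenues = {
    frozenset(): 0,
    frozenset({"owner"}): 0, frozenset({"miner"}): 0, frozenset({"third_party"}): 10,
    frozenset({"owner", "miner"}): 60, frozenset({"owner", "third_party"}): 40,
    frozenset({"miner", "third_party"}): 20,
    frozenset({"owner", "miner", "third_party"}): 100,
}
print(shapley_values(["owner", "miner", "third_party"], revenues.get))
```
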
Ramapatruni, S., Narayanan, S. N., Mittal, S., Joshi, A., Joshi, K..  2019.  Anomaly Detection Models for Smart Home Security. 2019 IEEE 5th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). :19–24.
Recent years have seen significant growth in the adoption of smart home devices. These devices provide convenience, security, and energy efficiency to users. For example, smart security cameras can detect unauthorized movements, and smoke sensors can detect potential fire accidents. However, many recent examples have shown that they open up a new cyber threat surface. There have been several recent examples of smart devices being hacked for privacy violations and also misused to perform DDoS attacks. In this paper, we explore the application of big data and machine learning to identify anomalous activities that can occur in a smart home environment. A Hidden Markov Model (HMM) is trained on network-level sensor data, created from a test bed with multiple sensors and smart devices. The generated HMM model is shown to achieve an accuracy of 97% in identifying potential anomalies that indicate attacks. We present our approach to building this model and compare it with other techniques available in the literature.
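A minimal sketch of HMM-based anomaly scoring of the kind described above, using the hmmlearn library on synthetic observations; the real model is trained on network-level sensor data from the authors' test bed, so the feature layout, number of hidden states and threshold below are assumptions.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

# Stand-in for network-level sensor observations during normal household activity
normal = rng.normal(loc=[0.0, 1.0], scale=0.3, size=(2000, 2))

model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
model.fit(normal)

# Crude threshold: average per-sample log-likelihood on training data, minus a margin
threshold = model.score(normal) / len(normal) - 3.0

def is_anomalous(window: np.ndarray) -> bool:
    """Flag a window whose average log-likelihood under the trained HMM is unusually low."""
    return model.score(window) / len(window) < threshold

suspicious = rng.normal(loc=[3.0, -2.0], scale=0.3, size=(50, 2))  # e.g. attack traffic
print(is_anomalous(normal[:50]), is_anomalous(suspicious))  # typically: False, True
```
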
Awaysheh, F., Cabaleiro, J. C., Pena, T. F., Alazab, M..  2019.  Big Data Security Frameworks Meet the Intelligent Transportation Systems Trust Challenges. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :807–813.
Many technological cases exploiting data science have been realized in recent years; machine learning, the Internet of Things, and stream data processing are examples of this trend. Other advanced applications have focused on capturing value from the streaming data of different objects of transport and traffic management in an Intelligent Transportation System (ITS). In this context, security control and trust level play a decisive role in the sustainable adoption of this trend. However, conceptual work integrating the security approaches of different disciplines into one coherent reference architecture is limited. The contribution of this paper is a reference architecture for ITS security (called SITS). In addition, a classification of Big Data technologies, products, and services to address the ITS trust challenges is presented. We also propose a novel multi-tier ITS security framework for validating the usability of SITS with business intelligence development in the enterprise domain.
2020-11-20
Benzekri, A., Laborde, R., Oglaza, A., Rammal, D., Barrere, F..  2019.  Dynamic security management driven by situations: An exploratory analysis of logs for the identification of security situations. 2019 3rd Cyber Security in Networking Conference (CSNet). :66—72.
Situation awareness consists of "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future". Being aware of the security situation is therefore mandatory to launch proper security reactions in response to cybersecurity attacks. Security Incident and Event Management solutions are deployed within Security Operation Centers. Some vendors propose machine learning based approaches to detect intrusions by analysing network behaviours. But cyberattacks like WannaCry and NotPetya, which shut down hundreds of thousands of computers, demonstrated that network monitoring and surveillance solutions remain insufficient. Detecting these complex attacks (a.k.a. Advanced Persistent Threats) requires security administrators to retain a large number of logs in case problems are detected and investigations of past security events are needed. This approach generates massive data that has to be analysed at the right time in order to detect any accidental or deliberately caused incident. At the same time, security administrators are not yet seasoned for such a task and lack the desired skills in data science. As a consequence, a large amount of data is available yet remains unexplored, which leaves a number of indicators of compromise under the radar. Building on the concept of situation awareness, we developed a situation-driven framework, called dynSMAUG, for dynamic security management. This approach simplifies the security management of dynamic systems and allows the specification of security policies at a high level of abstraction (close to security requirements). This invited paper aims at presenting the elicitation of real security situations, gathered from network security experts, and showing the results of exploratory analysis using complex event processing techniques to identify and extract security situations from a large volume of logs. The results contributed to the extension of the dynSMAUG solution.
Efstathopoulos, G., Grammatikis, P. R., Sarigiannidis, P., Argyriou, V., Sarigiannidis, A., Stamatakis, K., Angelopoulos, M. K., Athanasopoulos, S. K..  2019.  Operational Data Based Intrusion Detection System for Smart Grid. 2019 IEEE 24th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD). :1—6.
With the rapid progression of Information and Communication Technology (ICT) and especially of the Internet of Things (IoT), the conventional electrical grid is being transformed into a new intelligent paradigm known as the Smart Grid (SG). The SG provides significant benefits to both utility companies and energy consumers, such as two-way communication (of both electricity and information), distributed generation, remote monitoring, self-healing and pervasive control. At the same time, however, this dependence introduces new security challenges, since the SG inherits the vulnerabilities of multiple heterogeneous, co-existing legacy and smart technologies, such as IoT and Industrial Control Systems (ICS). An effective countermeasure against the various cyberthreats in the SG is the Intrusion Detection System (IDS), which informs the operator in a timely manner about possible cyberattacks and anomalies. In this paper, we provide an anomaly-based IDS especially designed for the SG, utilising operational data from a real power plant. In particular, many machine learning and deep learning models were deployed, introducing novel parameters and feature representations in a comparative study. The evaluation analysis demonstrated the efficacy of the proposed IDS and the improvement due to the suggested complex data representation.
Prasad, G., Huo, Y., Lampe, L., Leung, V. C. M..  2019.  Machine Learning Based Physical-Layer Intrusion Detection and Location for the Smart Grid. 2019 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm). :1—6.
Security and privacy of smart grid communication data is crucial given the nature of the continuous bidirectional information exchange between the consumer and the utilities. Data security has conventionally been ensured using cryptographic techniques implemented at the upper layers of the network stack. However, it has been shown that security can be further enhanced using physical layer (PHY) methods. To aid and/or complement such PHY and upper layer techniques, in this paper, we propose a PHY design that can detect and locate not only an active intruder but also a passive eavesdropper in the network. Our method can either be used as a stand-alone solution or together with existing techniques to achieve improved smart grid data security. Our machine learning based solution intelligently and automatically detects and locates a possible intruder in the network by reusing power line transmission modems installed in the grid for communication purposes. Simulation results show that our cost-efficient design provides near ideal intruder detection rates and also estimates its location with a high degree of accuracy.
Roy, D. D., Shin, D..  2019.  Network Intrusion Detection in Smart Grids for Imbalanced Attack Types Using Machine Learning Models. 2019 International Conference on Information and Communication Technology Convergence (ICTC). :576—581.
The smart grid has evolved as the next-generation power grid paradigm, enabling the transfer of real-time information between the utility company and the consumer via smart meters and advanced metering infrastructure (AMI). This information facilitates many services for both parties, such as automatic meter reading, demand-side management, and time-of-use (TOU) pricing. However, there have been growing security and privacy concerns over smart grid systems, which are built with both smart and legacy information and operational technologies. Intrusion detection is a critical security service for smart grid systems, alerting the system operator to the presence of ongoing attacks. Hence, there has been a lot of research on intrusion detection in the past, especially anomaly-based intrusion detection. Problems emerge when common pattern recognition approaches are applied to imbalanced data, which contain far more instances of normal behaviour than of attacks, causing low detection rates for the minority classes. In this paper, we study various machine learning models to overcome this drawback using the CIC-IDS2018 dataset [1].
Goyal, Y., Sharma, A..  2019.  A Semantic Machine Learning Approach for Cyber Security Monitoring. 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC). :439—442.
Security refers to precautions designed to protect the availability and integrity of information exchanged within the global digital community. Information security measures typically protect digital data from unauthorized access, disclosure, manipulation, alteration or destruction, on both the hardware and software side. According to assessments by experts working in the area of information security, new cyber-attacks keep emerging across all business processes. The analyses also indicate that although the level of risk is not excessive for most of these attacks, they pose a serious risk to critical data and their severity is growing. Prior security systems have been established to monitor various cyber-threats, predominantly using system-processed data or alerts to reveal both deterministic and stochastic patterns. The principal finding regarding deterministic patterns in cyber-attacks is that they are neither independent nor random over time. Consequently, the number of attacks in the past helps to forecast the number of future attacks, and these deterministic patterns can often be leveraged to generate moderately accurate monitoring.
2020-11-17
Poltronieri, F., Sadler, L., Benincasa, G., Gregory, T., Harrell, J. M., Metu, S., Moulton, C..  2018.  Enabling Efficient and Interoperable Control of IoBT Devices in a Multi-Force Environment. MILCOM 2018 - 2018 IEEE Military Communications Conference (MILCOM). :757—762.

Efficient application of Internet of Battlefield Things (IoBT) technology on the battlefield calls for innovative solutions to control and manage the deluge of heterogeneous IoBT devices. This paper presents an innovative paradigm to address heterogeneity in controlling IoBT and IoT devices, enabling multi-force cooperation in challenging battlefield scenarios.

Agadakos, I., Ciocarlie, G. F., Copos, B., George, J., Leslie, N., Michaelis, J..  2019.  Security for Resilient IoBT Systems: Emerging Research Directions. IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1—6.

Continued advances in IoT technology have prompted new investigation into its usage for military operations, both to augment and complement existing military sensing assets and support next-generation artificial intelligence and machine learning systems. Under the emerging Internet of Battlefield Things (IoBT) paradigm, a multitude of operational conditions (e.g., diverse asset ownership, degraded networking infrastructure, adversary activities) necessitate the development of novel security techniques, centered on establishment of trust for individual assets and supporting resilience of broader systems. To advance current IoBT efforts, a set of research directions are proposed that aim to fundamentally address the issues of trust and trustworthiness in contested battlefield environments, building on prior research in the cybersecurity domain. These research directions focus on two themes: (1) Supporting trust assessment for known/unknown IoT assets; (2) Ensuring continued trust of known IoBT assets and systems.

Agadakos, I., Ciocarlie, G. F., Copos, B., Emmi, M., George, J., Leslie, N., Michaelis, J..  2019.  Application of Trust Assessment Techniques to IoBT Systems. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :833—840.

Continued advances in IoT technology have prompted new investigation into its usage for military operations, both to augment and complement existing military sensing assets and support next-generation artificial intelligence and machine learning systems. Under the emerging Internet of Battlefield Things (IoBT) paradigm, current operational conditions necessitate the development of novel security techniques, centered on establishment of trust for individual assets and supporting resilience of broader systems. To advance current IoBT efforts, a collection of prior-developed cybersecurity techniques is reviewed for applicability to conditions presented by IoBT operational environments (e.g., diverse asset ownership, degraded networking infrastructure, adversary activities) through use of supporting case study examples. The research techniques covered focus on two themes: (1) Supporting trust assessment for known/unknown IoT assets; (2) ensuring continued trust of known IoT assets and IoBT systems.

2020-11-16
Su, H., Halak, B., Zwolinski, M..  2019.  Two-Stage Architectures for Resilient Lightweight PUFs. 2019 IEEE 4th International Verification and Security Workshop (IVSW). :19–24.
The following topics are dealt with: Internet of Things; invasive software; security of data; program testing; reverse engineering; product codes; binary codes; decoding; maximum likelihood decoding; field programmable gate arrays.
2020-11-09
Wheelus, C., Bou-Harb, E., Zhu, X..  2018.  Tackling Class Imbalance in Cyber Security Datasets. 2018 IEEE International Conference on Information Reuse and Integration (IRI). :229–232.
It is clear that cyber-attacks are a danger that must be addressed with great resolve, as they threaten the information infrastructure upon which we all depend. Many studies have been published expressing varying levels of success with machine learning approaches to combating cyber-attacks, but many modern studies still focus on training and evaluating with very outdated datasets containing old attacks that are no longer a threat, and also lack data on new attacks. Recent datasets like UNSW-NB15 and SANTA have been produced to address this problem. Even so, these modern datasets suffer from class imbalance, which reduces the efficacy of predictive models trained using these datasets. Herein we evaluate several pre-processing methods for addressing the class imbalance problem, using several of the most popular machine learning algorithms and a variant of UNSW-NB15 based upon the attributes from the SANTA dataset.
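Two common pre-processing options for the class imbalance problem discussed above can be sketched with the imbalanced-learn library; the synthetic dataset stands in for the UNSW-NB15/SANTA variant, and the 1% attack ratio is an assumption.

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

# Synthetic stand-in for an imbalanced intrusion dataset (about 1% attack traffic)
X, y = make_classification(n_samples=20000, n_features=30, weights=[0.99, 0.01],
                           random_state=0)
print("original:", Counter(y))

# Option 1: synthesize new minority-class samples (SMOTE)
X_over, y_over = SMOTE(random_state=0).fit_resample(X, y)
print("SMOTE:", Counter(y_over))

# Option 2: randomly discard majority-class samples
X_under, y_under = RandomUnderSampler(random_state=0).fit_resample(X, y)
print("under-sampled:", Counter(y_under))
```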
2020-11-04
Apruzzese, G., Colajanni, M., Ferretti, L., Marchetti, M..  2019.  Addressing Adversarial Attacks Against Security Systems Based on Machine Learning. 2019 11th International Conference on Cyber Conflict (CyCon). 900:1—18.

Machine-learning solutions are successfully adopted in multiple contexts but the application of these techniques to the cyber security domain is complex and still immature. Among the many open issues that affect security systems based on machine learning, we concentrate on adversarial attacks that aim to affect the detection and prediction capabilities of machine-learning models. We consider realistic types of poisoning and evasion attacks targeting security solutions devoted to malware, spam and network intrusion detection. We explore the possible damages that an attacker can cause to a cyber detector and present some existing and original defensive techniques in the context of intrusion detection systems. This paper contains several performance evaluations that are based on extensive experiments using large traffic datasets. The results highlight that modern adversarial attacks are highly effective against machine-learning classifiers for cyber detection, and that existing solutions require improvements in several directions. The paper paves the way for more robust machine-learning-based techniques that can be integrated into cyber security platforms.

Khalid, F., Hanif, M. A., Rehman, S., Ahmed, R., Shafique, M..  2019.  TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks. 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS). :188—193.

Most of the data manipulation attacks on deep neural networks (DNNs) during the training stage introduce a perceptible noise that can be catered for by preprocessing during inference, or can be identified during the validation phase. Therefore, data poisoning attacks during inference (e.g., adversarial attacks) are becoming more popular. However, many of them do not consider the imperceptibility factor in their optimization algorithms, and can be detected by correlation and structural similarity analysis, or are noticeable (e.g., by humans) in a multi-level security system. Moreover, the majority of inference attacks rely on some knowledge about the training dataset. In this paper, we propose a novel methodology which automatically generates imperceptible attack images by using the back-propagation algorithm on pre-trained DNNs, without requiring any information about the training dataset (i.e., completely training data-unaware). We present a case study on traffic sign detection using the VGGNet trained on the German Traffic Sign Recognition Benchmark dataset in an autonomous driving use case. Our results demonstrate that the generated attack images successfully perform misclassification while remaining imperceptible in both “subjective” and “objective” quality tests.
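The TrISec methodology itself is not reproduced here, but a generic iterative gradient attack on a pre-trained classifier, constrained to a small L-infinity budget so the change stays visually subtle, conveys the flavour; the untrained VGG16 stand-in, the random input image, the target class and the budget below are all placeholders (a real run would use an actual trained traffic-sign model and image).

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg16(weights=None).eval()   # stand-in for a pre-trained traffic-sign DNN
image = torch.rand(1, 3, 224, 224)          # placeholder input image in [0, 1]
target = torch.tensor([7])                  # class the attacker wants predicted

eps, step, iters = 2 / 255, 0.5 / 255, 40   # small budget keeps the change hard to notice
adv = image.clone()

for _ in range(iters):
    adv.requires_grad_(True)
    loss = F.cross_entropy(model(adv), target)
    grad, = torch.autograd.grad(loss, adv)
    with torch.no_grad():
        adv = adv - step * grad.sign()                   # move toward the target class
        adv = image + (adv - image).clamp(-eps, eps)     # stay within the L-inf budget
        adv = adv.clamp(0, 1)                            # keep a valid image

print((adv - image).abs().max().item())                  # maximum per-pixel perturbation
```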

Jin, Y., Tomoishi, M., Matsuura, S..  2019.  A Detection Method Against DNS Cache Poisoning Attacks Using Machine Learning Techniques: Work in Progress. 2019 IEEE 18th International Symposium on Network Computing and Applications (NCA). :1—3.

DNS-based domain name resolution is one of the most fundamental Internet services. Meanwhile, DNS cache poisoning attacks have become a critical threat in the cyber world. In addition to Kaminsky attacks, falsified data from compromised authoritative DNS servers has also become a threat. Several solutions, such as DNSSEC (DNS Security Extensions), have been proposed in the literature to prevent DNS cache poisoning attacks in the former case, but no effective solutions have been proposed for the latter case. Moreover, due to performance issues and the significant workload increase on DNS cache servers, DNSSEC has not yet been widely deployed. In this work, we propose an advanced detection method against DNS cache poisoning attacks using machine learning techniques. In the proposed method, in addition to the basic 5-tuple information of a DNS packet, we add a number of special features extracted from the standard DNS protocols as well as heuristic aspects such as “time related features”, “GeoIP related features” and “trigger of cached DNS data”, in order to identify the DNS response packets used for cache poisoning attacks, especially those from compromised authoritative DNS servers. In this paper, as a work in progress, we describe the basic idea and concept of our proposed method as well as the intended network topology of the experimental environment, while the prototype implementation, training data preparation and model creation as well as the evaluations are left to future work.
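As a sketch of how heuristic features of this kind might be assembled for a classifier, the function below turns a parsed DNS response plus per-domain history into a numeric feature vector; the field names, the GeoIP country lookup and the history structure are assumptions, not the authors' feature set.

```python
import time

def extract_features(response: dict, history: dict) -> list:
    """Turn a parsed DNS response plus cached history for the queried domain
    into a numeric feature vector. `response` and `history` are assumed to be
    produced elsewhere (packet capture, resolver cache, GeoIP database)."""
    prev = history.get(response["qname"], {})
    return [
        response["ttl"],                                   # unusually low/high TTLs
        len(response["answers"]),                          # answer-record count
        int(response["src_ip"] != prev.get("expected_ns_ip", response["src_ip"])),
        int(response["country"] != prev.get("usual_country", response["country"])),
        time.time() - prev.get("last_query_ts", time.time()),  # unsolicited responses
        int(prev.get("cached", False)),                    # response for already-cached data
    ]

# Example: a response whose source address and geolocation differ from the domain's history
features = extract_features(
    {"qname": "example.com", "ttl": 60, "answers": ["203.0.113.9"],
     "src_ip": "198.51.100.7", "country": "XX"},
    {"example.com": {"expected_ns_ip": "192.0.2.1", "usual_country": "US",
                     "last_query_ts": time.time() - 0.2, "cached": True}},
)
print(features)   # vectors like this can be fed to any standard classifier
```
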

2020-11-02
Chong, T., Anu, V., Sultana, K. Z..  2019.  Using Software Metrics for Predicting Vulnerable Code-Components: A Study on Java and Python Open Source Projects. 2019 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC). :98–103.

Software vulnerabilities often remain hidden until an attacker exploits the weak/insecure code. Therefore, testing the software from a vulnerability discovery perspective becomes challenging for developers if they do not inspect their code thoroughly (which is time-consuming). We propose that vulnerability prediction using certain software metrics can support the testing process by identifying vulnerable code-components (e.g., functions, classes, etc.). Once a code-component is predicted as vulnerable, the developers can focus their testing efforts on it, thereby avoiding the time/effort required for testing the entire application. The current paper presents a study that compares how software metrics perform as vulnerability predictors for software projects developed in two different languages (Java vs Python). The goal of this research is to analyze the vulnerability prediction performance of software metrics for different programming languages. We designed and conducted experiments on security vulnerabilities reported for three Java projects (Apache Tomcat 6, Tomcat 7, Apache CXF) and two Python projects (Django and Keystone). In this paper, we focus on a specific type of code component: Functions. We apply Machine Learning models for predicting vulnerable functions. Overall results show that software metrics-based vulnerability prediction is more useful for Java projects than Python projects (i.e., software metrics when used as features were able to predict Java vulnerable functions with a higher recall and precision compared to Python vulnerable functions prediction).

Zhong, J., Yang, C..  2019.  A Compositionality Assembled Model for Learning and Recognizing Emotion from Bodily Expression. 2019 IEEE 4th International Conference on Advanced Robotics and Mechatronics (ICARM). :821–826.
When we express our internal states, such as emotions, the body expressions we use follow the compositionality principle: a theory from linguistics which proposes that the individual components of a bodily presentation, together with the rules used to combine them, are the major parts of this process. In this paper, the principle is applied to expressing and recognizing emotional states through body expression, in which certain key features can be learned to represent primitives of the internal emotional state in the form of basic variables. This is done with a hierarchical recurrent neural network (RNN) learning framework because of its nonlinear dynamic bifurcation, so that variables can be learned to represent different hierarchies. In addition, we applied adaptive learning techniques to meet the requirements of real-time emotion recognition, in which a stable representation can be maintained compared to previous work. The model is examined by comparing the PB values between the training and recognition phases. This hierarchical model demonstrates the plausibility of the compositionality hypothesis through RNN learning and explains how key features can be used and combined in bodily expression to convey an emotional state.
Zhao, Xinghan, Gao, Xiangfei.  2018.  An AI Software Test Method Based on Scene Deductive Approach. 2018 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :14—20.
Artificial intelligence (AI) software has high algorithmic complexity, the scale and dimensionality of its input and output parameters are high, and the test oracle is not explicit. These features create many difficulties for the design of test cases. This paper proposes an AI software testing method based on a scene deductive approach. It models the input and output parameters and the environment, uses a random algorithm to generate the inputs of the test cases, then uses a deductive algorithm to run the software tests automatically, and uses test assertions to verify the results. After describing the theory, this paper uses an intelligent tracking car as an example to illustrate the application of the method and the problems needing attention. In the end, the paper describes the shortcomings of this method and future research directions.
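A minimal sketch of the random-generation-plus-assertion loop described above, with an intelligent tracking car controller as the system under test; the controller logic and the invariants used as the test oracle are stand-ins for illustration.

```python
import random

def tracking_controller(offset: float) -> float:
    """Stand-in for the AI system under test: steering command for an
    intelligent tracking car given the lane-line offset (placeholder logic)."""
    return max(-1.0, min(1.0, -0.8 * offset))

def generate_scenario(rng: random.Random) -> float:
    """Randomly generate a test input (a lane offset in metres)."""
    return rng.uniform(-2.0, 2.0)

def run_tests(n_cases: int = 1000, seed: int = 42) -> None:
    rng = random.Random(seed)
    for i in range(n_cases):
        offset = generate_scenario(rng)
        steering = tracking_controller(offset)
        # Test assertions: the oracle is expressed as invariants, not exact expected outputs
        assert -1.0 <= steering <= 1.0, f"case {i}: command out of range"
        assert steering * offset <= 0, f"case {i}: steering away from the lane centre"
    print(f"{n_cases} randomly generated scenarios passed")

run_tests()
```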