Biblio

Found 203 results

Filters: Keyword is Measurement
2019-11-12
Zhang, Xian, Ben, Kerong, Zeng, Jie.  2018.  Cross-Entropy: A New Metric for Software Defect Prediction. 2018 IEEE International Conference on Software Quality, Reliability and Security (QRS). :111-122.

Defect prediction is an active topic in software quality assurance that can help developers find potential bugs and make better use of resources. To improve prediction performance, this paper introduces cross-entropy, a common measure for natural language, as a new code metric for defect prediction tasks and proposes a framework called DefectLearner for this process. We first build a recurrent neural network language model to learn regularities in source code from a software repository. Based on the trained model, the cross-entropy of each component can be calculated. To evaluate its discrimination of defect-proneness, cross-entropy is compared with 20 widely used metrics on 12 open-source projects. The experimental results show that the cross-entropy metric is more discriminative than 50% of the traditional metrics. In addition, we combine cross-entropy with traditional metric suites for more accurate defect prediction. With cross-entropy added, the performance of the prediction models improves by an average of 2.8% in F1-score.
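
The RNN language model behind DefectLearner is not reproduced in the abstract; as a rough sketch of the cross-entropy idea, the toy below scores a token sequence under a smoothed bigram model standing in for the trained RNN (all names and data hypothetical):

```python
import math
from collections import Counter

def train_bigram_lm(token_sequences):
    """Count unigram and bigram frequencies over tokenized source files."""
    unigrams, bigrams = Counter(), Counter()
    for tokens in token_sequences:
        padded = ["<s>"] + tokens
        unigrams.update(padded)
        bigrams.update(zip(padded, padded[1:]))
    return unigrams, bigrams

def cross_entropy(tokens, unigrams, bigrams, vocab_size, alpha=1.0):
    """Average negative log2-probability of each token given its predecessor
    (add-alpha smoothing stands in for the paper's RNN model)."""
    padded = ["<s>"] + tokens
    nll = 0.0
    for prev, cur in zip(padded, padded[1:]):
        p = (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * vocab_size)
        nll -= math.log2(p)
    return nll / len(tokens)

# Toy corpus: each "file" is a list of code tokens.
corpus = [["int", "i", "=", "0", ";"], ["int", "j", "=", "1", ";"]]
uni, bi = train_bigram_lm(corpus)
print(cross_entropy(["int", "k", "=", "0", ";"], uni, bi, vocab_size=len(uni)))
```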

Ferenc, Rudolf, Hegedűs, Péter, Gyimesi, Péter, Antal, Gábor, Bán, Dénes, Gyimóthy, Tibor.  2019.  Challenging Machine Learning Algorithms in Predicting Vulnerable JavaScript Functions. 2019 IEEE/ACM 7th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE). :8-14.

The rapid rise of cyber-crime activities and the growing number of devices threatened by them place software security issues in the spotlight. As around 90% of all attacks exploit known types of security issues, finding vulnerable components and applying existing mitigation techniques is a viable practical approach to fighting cyber-crime. In this paper, we investigate how state-of-the-art machine learning techniques, including a popular deep learning algorithm, perform in predicting functions with possible security vulnerabilities in JavaScript programs. We applied 8 machine learning algorithms to build prediction models using a new dataset constructed for this research from the vulnerability information in the public databases of the Node Security Project and the Snyk platform, and from code-fixing patches on GitHub. We used static source code metrics as predictors and an extensive grid-search algorithm to find the best performing models. We also examined the effect of various re-sampling strategies for handling the imbalanced nature of the dataset. The best performing algorithm was KNN, which created a model for the prediction of vulnerable functions with an F-measure of 0.76 (0.91 precision and 0.66 recall). Moreover, deep learning, tree- and forest-based classifiers, and SVM were competitive, with F-measures over 0.70. Although the F-measures did not vary significantly across the re-sampling strategies, the distribution of precision and recall did change. Using no re-sampling produced models preferring high precision, while the re-sampling strategies balanced the IR measures.
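
As a hedged sketch of the workflow described above (not the authors' exact pipeline, dataset, or parameter grid), a KNN model with naive random oversampling and a grid search could be assembled with scikit-learn roughly as follows:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Stand-in data: rows are static source code metrics, 1 = vulnerable function.
X = rng.normal(size=(1000, 10))
y = (rng.random(1000) < 0.1).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Naive random oversampling of the minority class (one of several strategies
# the paper compares; SMOTE etc. would be drop-in alternatives).
minority = np.flatnonzero(y_tr == 1)
extra = rng.choice(minority, size=(y_tr == 0).sum() - minority.size)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

grid = GridSearchCV(KNeighborsClassifier(),
                    {"n_neighbors": [1, 3, 5, 9], "weights": ["uniform", "distance"]},
                    scoring="f1", cv=5)
grid.fit(X_bal, y_bal)
print("best params:", grid.best_params_)
print("test F1:", f1_score(y_te, grid.predict(X_te)))
```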

2019-10-30
Jansen, Rob, Traudt, Matthew, Hopper, Nicholas.  2018.  Privacy-Preserving Dynamic Learning of Tor Network Traffic. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :1944-1961.

Experimentation tools facilitate exploration of Tor performance and security research problems and allow researchers to safely and privately conduct Tor experiments without risking harm to real Tor users. However, researchers using these tools configure them to generate network traffic based on simplifying assumptions and outdated measurements, and without understanding the efficacy of their configuration choices. In this work, we design a novel technique for dynamically learning Tor network traffic models using hidden Markov modeling and privacy-preserving measurement techniques. We conduct a safe but detailed measurement study of Tor using 17 relays (~2% of Tor bandwidth) over the course of 6 months, measuring general statistics and models that can be used to generate a sequence of streams and packets. We show how our measurement results and traffic models can be used to generate traffic flows in private Tor networks and how our models are more realistic than standard and alternative network traffic generation methods.
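
The learned models themselves are not given in the abstract; a toy illustration of generating packet inter-arrival times from a two-state Markov model with state-dependent rates (all parameters invented) could look like:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2-state traffic model (e.g., "active" / "idle") with
# state-dependent mean packet inter-arrival times; the paper learns such
# parameters privately from real Tor measurements.
transition = np.array([[0.9, 0.1],
                       [0.3, 0.7]])
mean_gap_ms = np.array([5.0, 200.0])

def sample_stream(n_packets, start_state=0):
    """Generate packet inter-arrival times by walking the Markov chain."""
    state, gaps = start_state, []
    for _ in range(n_packets):
        gaps.append(rng.exponential(mean_gap_ms[state]))
        state = rng.choice(2, p=transition[state])
    return gaps

print(sample_stream(10))
```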

2019-10-23
Alshawish, Ali, Spielvogel, Korbinian, de Meer, Hermann.  2019.  A Model-Based Time-to-Compromise Estimator to Assess the Security Posture of Vulnerable Networks. 2019 International Conference on Networked Systems (NetSys). :1-3.

Several operational and economic factors impact the patching decisions of critical infrastructures. The constraints imposed by such factors could prevent organizations from fully remedying all of the vulnerabilities that expose their (critical) assets to risk. Therefore, an involved decision maker (e.g. security officer) has to strategically decide on the allocation of possible remediation efforts towards minimizing the inherent security risk. This, however, involves the use of comparative judgments to prioritize risks and remediation actions. Throughout this work, the security risk is quantified using the security metric Time-To-Compromise (TTC). Our main contribution is to provide a generic TTC estimator to comparatively assess the security posture of computer networks taking into account interdependencies between the network components, different adversary skill levels, and characteristics of (known and zero-day) vulnerabilities. The presented estimator relies on a stochastic TTC model and Monte Carlo simulation (MCS) techniques to account for the input data variability and inherent prediction uncertainties.
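
The abstract does not give the stochastic TTC model itself; a minimal Monte Carlo sketch, with an invented per-vulnerability model and a scalar attacker skill, conveys the shape of such an estimator:

```python
import random

random.seed(1)

# Hypothetical per-vulnerability parameters: probability that a working
# exploit is already available, time (days) to use it, and time to
# develop one from scratch. None of these values come from the paper.
vulns = [{"p_exploit": 0.6, "t_use": 1.0, "t_develop": 12.0},
         {"p_exploit": 0.2, "t_use": 2.0, "t_develop": 30.0}]

def sample_ttc(vulns, skill):
    """One Monte Carlo draw of time-to-compromise: the attacker takes the
    fastest path among the exposed vulnerabilities, modulated by skill."""
    times = []
    for v in vulns:
        if random.random() < v["p_exploit"] * skill:
            times.append(v["t_use"])
        else:
            times.append(v["t_develop"] / skill)
    return min(times)

runs = [sample_ttc(vulns, skill=0.8) for _ in range(100_000)]
print("estimated mean TTC (days):", sum(runs) / len(runs))
```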

Zieger, Andrej, Freiling, Felix, Kossakowski, Klaus-Peter.  2018.  The β-Time-to-Compromise Metric for Practical Cyber Security Risk Estimation. 2018 11th International Conference on IT Security Incident Management & IT Forensics (IMF). :115-133.

To manage cybersecurity risks in practice, a simple yet effective method to assess such risks for individual systems is needed. With time-to-compromise (TTC), McQueen et al. (2005) introduced such a metric that measures the expected time that a system remains uncompromised given a specific threat landscape. Unlike other approaches that require complex system modeling to proceed, TTC combines simplicity with expressiveness and has therefore evolved into one of the most successful cybersecurity metrics in practice. We revisit TTC and identify several mathematical and methodological shortcomings, which we address by embedding all aspects of the metric into the continuous domain and adding the possibility to incorporate information about vulnerability characteristics and other cyber threat intelligence into the model. We propose β-TTC, a formal extension of TTC which includes information from CVSS vectors as well as a continuous attacker skill based on a β-distribution. We show that our new metric (1) remains simple enough for practical use and (2) gives more realistic predictions than the original TTC, using data from a modern and productively used vulnerability database of a national CERT.
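
The defining change in β-TTC is drawing attacker skill from a β-distribution rather than fixing it; a self-contained sketch of that Monte Carlo shape (toy single-vulnerability model, invented parameters, not the paper's closed-form treatment):

```python
import random

random.seed(2)

def sample_ttc(skill, p_exploit=0.6, t_use=1.0, t_develop=12.0):
    """One draw from a toy single-vulnerability TTC model (see the
    sketch above); all parameters are hypothetical."""
    if random.random() < p_exploit * skill:
        return t_use
    return t_develop / skill

def beta_ttc(a, b, n=100_000):
    """Mean TTC with attacker skill ~ Beta(a, b) instead of a point value."""
    total = 0.0
    for _ in range(n):
        skill = max(random.betavariate(a, b), 1e-3)  # guard against zero skill
        total += sample_ttc(skill)
    return total / n

print("skilled attacker population:  ", beta_ttc(a=5, b=2))
print("unskilled attacker population:", beta_ttc(a=2, b=5))
```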

2019-10-15
Coleman, M. S., Doody, D. P., Shields, M. A..  2018.  Machine Learning for Real-Time Data-Driven Security Practices. 2018 29th Irish Signals and Systems Conference (ISSC). :1–6.

The risk of cyber-attacks exploiting vulnerable organisations has increased significantly over the past several years. These attacks may combine to exploit a vulnerability breach within a system's protection strategy, which has the potential for loss, damage or destruction of assets. Consequently, every vulnerability has an accompanying risk, which is defined as the "intersection of assets, threats, and vulnerabilities" [1]. This research project aims to experimentally compare the similarity-based ranking of cyber security information utilising a recommendation environment. The Memory-Based Collaborative Filtering technique was employed, specifically the User-Based and Item-Based approaches. These systems utilised information from the National Vulnerability Database, specifically for the identification and similarity-based ranking of cyber-security vulnerability information relating to hardware and software applications. Experiments were performed using the Item-Based technique to identify the optimum system parameters, evaluated through the AUC evaluation metric. Once identified, the Item-Based technique was compared with the User-Based technique, which utilised the parameters identified from the previous experiments. During these experiments, the Pearson's Correlation Coefficient and the Cosine similarity measure were used. From these experiments, it was identified that using the Item-Based technique with the Cosine similarity measure achieved an AUC evaluation metric of 0.80225.
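
An Item-Based collaborative filtering step with cosine similarity can be sketched on a toy interaction matrix (not the NVD-derived data used in the study):

```python
import numpy as np

# Toy interaction matrix: rows = users (analysts), columns = items
# (vulnerability entries); 1 = the analyst flagged the entry as relevant.
R = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)

def cosine_item_similarity(R):
    """Pairwise cosine similarity between item (column) vectors."""
    norms = np.linalg.norm(R, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    Rn = R / norms
    return Rn.T @ Rn

S = cosine_item_similarity(R)

def rank_items(user_row, S):
    """Item-based CF: rank unseen items by similarity-weighted user history."""
    scores = user_row @ S
    scores[user_row > 0] = -np.inf          # don't re-recommend seen items
    return np.argsort(scores)[::-1]

print("ranking for user 0:", rank_items(R[0], S))
```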

2019-09-30
Hohlfeld, J., Czoschke, P., Asselin, P., Benakli, M..  2019.  Improving Our Understanding of Measured Jitter (in HAMR). IEEE Transactions on Magnetics. 55:1–11.

The understanding of measured jitter is improved in three ways. First, it is shown that the measured jitter is governed not only by written-in jitter and the reader resolution along the cross-track direction but by remanence noise in the vicinity of transitions and the down-track reader resolution as well. Second, a novel data analysis scheme is introduced that allows for an unambiguous separation of these two contributions. Third, based on data analyses involving the first two findings and micro-magnetic simulations, we identify and explain the root causes for variations of jitter with write current (WC) (write field), WC overshoot amplitude (write-field rise time), and linear disk velocity measured for heat-assisted magnetic recording.

2019-09-26
Torkura, K. A., Sukmana, M. I. H., Meinig, M., Cheng, F., Meinel, C., Graupner, H..  2018.  A Threat Modeling Approach for Cloud Storage Brokerage and File Sharing Systems. NOMS 2018 - 2018 IEEE/IFIP Network Operations and Management Symposium. :1-5.
Cloud storage brokerage systems abstract cloud storage complexities by mediating technical and business relationships between cloud stakeholders, while providing value-added services. This, however, raises security challenges pertaining to the integration of disparate components with sometimes conflicting security policies and architectural complexities. Assessing the security risks of these challenges is therefore important for Cloud Storage Brokers (CSBs). In this paper, we present a threat modeling schema to analyze and identify threats and risks in cloud brokerage systems. Our threat modeling schema works by generating attack trees, attack graphs, and data flow diagrams that represent the interconnections between identified security risks. Our proof-of-concept implementation employs the Common Configuration Scoring System (CCSS) to support the threat modeling schema, since current schemes lack sufficient security metrics, which are imperative for comprehensive risk assessments. We demonstrate the efficiency of our proposal by devising CCSS base scores for two attacks commonly launched against cloud storage systems: the Cloud Storage Enumeration Attack and the Cloud Storage Exploitation Attack. These metrics are then combined with CVSS-based metrics to assign probabilities in an attack tree. Thus, we show the possibility of combining CVSS and CCSS for comprehensive threat modeling, and also show that our schema can be used to improve cloud security.
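
How CCSS/CVSS base scores become attack-tree probabilities is paper-specific; one plausible sketch assumes a simple linear score-to-probability mapping and the standard AND/OR node combinations:

```python
def to_probability(base_score):
    """Map a CVSS/CCSS base score (0-10) to a rough exploitation
    probability; this linear scaling is an assumption, not the paper's
    exact mapping."""
    return min(base_score / 10.0, 1.0)

# Hypothetical leaf scores for the two attacks discussed above.
p_enum = to_probability(6.9)      # cloud storage enumeration (CCSS-style)
p_exploit = to_probability(8.1)   # cloud storage exploitation (CVSS-style)

# AND node: both steps must succeed; OR node: either path suffices.
p_and = p_enum * p_exploit
p_or = 1 - (1 - p_enum) * (1 - p_exploit)
print(f"AND-node probability: {p_and:.3f}")
print(f"OR-node probability:  {p_or:.3f}")
```
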
Miletić, M., Vukušić, M., Mauša, G., Grbac, T. G..  2018.  Cross-Release Code Churn Impact on Effort-Aware Software Defect Prediction. 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). :1460-1466.

Code churn has been successfully used to identify defect-inducing changes in software development. Our recent analysis of cross-release code churn showed that several design metrics exhibit moderate correlation with the number of defects in complex systems. The goal of this paper is to explore whether cross-release code churn can be used to identify critical design changes and contribute to the prediction of defects for software in evolution. In our case study, we used two types of data from consecutive releases of open-source projects, with and without cross-release code churn, to build standard prediction models. The prediction models were trained on earlier releases and tested on the following ones, evaluating the performance in terms of AUC, GM and the effort-aware measure Pop. The comparison of their performance was used to answer our research question. The obtained results showed that the prediction model performs better when cross-release code churn is included. A practical implication of this research is to use cross-release code churn to aid in safe planning of the next release in software development.
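
A compact sketch of the cross-release evaluation setup, with synthetic stand-in data and logistic regression in place of whichever standard prediction models the study used:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-in feature tables for two consecutive releases; the last column
# plays the role of cross-release code churn (purely synthetic data).
def fake_release(n):
    X = rng.normal(size=(n, 6))
    y = (X[:, -1] + rng.normal(scale=2.0, size=n) > 1.5).astype(int)
    return X, y

X_old, y_old = fake_release(800)   # release N: training data
X_new, y_new = fake_release(400)   # release N+1: evaluation data

for cols, label in [(slice(0, 5), "without churn"), (slice(0, 6), "with churn")]:
    model = LogisticRegression(max_iter=1000).fit(X_old[:, cols], y_old)
    auc = roc_auc_score(y_new, model.predict_proba(X_new[:, cols])[:, 1])
    print(f"{label}: AUC = {auc:.3f}")
```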

2019-08-05
Severson, T., Rodriguez-Seda, E., Kiriakidis, K., Croteau, B., Krishnankutty, D., Robucci, R., Patel, C., Banerjee, N..  2018.  Trust-Based Framework for Resilience to Sensor-Targeted Attacks in Cyber-Physical Systems. 2018 Annual American Control Conference (ACC). :6499-6505.

Networked control systems improve the efficiency of cyber-physical plants both functionally, by the availability of data generated even in far-flung locations, and operationally, by the adoption of standard protocols. A side-effect, however, is that now the safety and stability of a local process and, in turn, of the entire plant are more vulnerable to malicious agents. Leveraging the communication infrastructure, the authors here present the design of networked control systems with built-in resilience. Specifically, the paper addresses attacks known as false data injections that originate within compromised sensors. In the proposed framework for closed-loop control, the feedback signal is constructed by weighted consensus of estimates of the process state gathered from other interconnected processes. Observers are introduced to generate the state estimates from the local data. Side-channel monitors are attached to each primary sensor in order to assess proper code execution. These monitors provide estimates of the trust assigned to each observer output and, more importantly, independent of it; these estimates serve as weights in the consensus algorithm. The authors tested the concept on a multi-sensor networked physical experiment with six primary sensors. The weighted consensus was demonstrated to yield a feedback signal within specified accuracy even if four of the six primary sensors were injecting false data.
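
The trust-weighted consensus step can be illustrated numerically; the sketch below uses six hypothetical observer estimates, four of them corrupted, with side-channel trust scores as consensus weights:

```python
import numpy as np

# Hypothetical state estimates from six observers; sensors 2-5 inject
# false data, and their side-channel monitors report low trust.
estimates = np.array([20.1, 19.8, 35.0, 34.2, 36.1, 33.8])
trust     = np.array([0.95, 0.90, 0.05, 0.08, 0.02, 0.10])

def weighted_consensus(estimates, trust):
    """Feedback signal as the trust-weighted average of observer outputs."""
    w = trust / trust.sum()
    return float(w @ estimates)

print("consensus feedback:", weighted_consensus(estimates, trust))
# ~21.7: close to the honest sensors' readings (~20) even though four
# of the six primary sensors report corrupted values.
```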

Ghugar, U., Pradhan, J..  2018.  NL-IDS: Trust Based Intrusion Detection System for Network Layer in Wireless Sensor Networks. 2018 Fifth International Conference on Parallel, Distributed and Grid Computing (PDGC). :512-516.

Over the last few years, security in wireless sensor networks (WSNs) has become essential because WSN applications involve sharing important information between nodes. A large number of security issues arise due to the open deployment of the network. Attackers disturb the security of the system by attacking the different protocol layers of the WSN. The standard AODV routing protocol faces security issues when the route discovery process takes place, while the data should be transmitted to the destination along a secure path. Therefore, to support this process we have proposed a trust-based intrusion detection system (NL-IDS) for the network layer in WSNs to detect black hole attackers in the network. The sensor node trust is calculated as per the deviation of a key factor at the network layer based on the black hole attack. We use the watchdog technique, where a sensor node continuously monitors its neighbor node by calculating a periodic trust value. Finally, the overall trust value of the sensor node is evaluated from the gathered values of the network-layer trust metrics (past and present trust values). This NL-IDS scheme is efficient at identifying malicious nodes with respect to black hole attacks at the network layer. To analyze the performance of NL-IDS, we simulated the model in MATLAB R2015a, and the results show that NL-IDS is better than Wang et al. [11] in terms of detection accuracy and false alarm rate.
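
The paper's exact trust formula is not given in the abstract; a watchdog-style periodic trust update might be sketched as an EWMA blend of past trust and currently observed forwarding behaviour (all numbers invented):

```python
def periodic_trust(prev_trust, forwarded, expected, alpha=0.6):
    """Blend the historical trust value with the forwarding ratio
    observed by the watchdog in the current period."""
    current = forwarded / expected if expected else 1.0
    return alpha * prev_trust + (1 - alpha) * current

trust = 1.0
# (packets forwarded, packets expected) per period; a black hole emerges.
for fwd, exp in [(50, 50), (48, 50), (5, 50), (0, 50)]:
    trust = periodic_trust(trust, fwd, exp)
    print(f"trust = {trust:.2f}", "-> flag as malicious" if trust < 0.5 else "")
```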

Mai, H. L., Nguyen, T., Doyen, G., Cogranne, R., Mallouli, W., Oca, E. M. de, Festor, O..  2018.  Towards a security monitoring plane for named data networking and its application against content poisoning attack. NOMS 2018 - 2018 IEEE/IFIP Network Operations and Management Symposium. :1–9.

Named Data Networking (NDN) is the most mature proposal of the Information Centric Networking paradigm, a clean-slate approach for the Future Internet. Although NDN was designed to tackle security issues inherent to IP networks natively, newly introduced security attacks in its transitional phase threaten NDN's practical deployment. Therefore, a security monitoring plane for NDN is indispensable before any potential deployment of this novel architecture in an operating context by any provider. We propose an approach for monitoring and anomaly detection in NDN nodes leveraging Bayesian Network techniques. A list of monitored metrics is introduced as a quantitative measure to feature the behavior of an NDN node. By leveraging hypothesis testing theory, a micro detector is developed to detect whenever a metric significantly changes from its normal behavior. A Bayesian network structure that correlates alarms from the micro detectors is designed based on expert knowledge of the NDN specification and the NFD implementation. The relevance and performance of our security monitoring approach are demonstrated by considering the Content Poisoning Attack (CPA), one of the most critical attacks in NDN, through extensive experimental data collected from a real NDN deployment.
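
A micro detector of the kind described can be sketched as a per-metric running hypothesis test; the z-score test with Welford's online statistics below is a simple stand-in for the paper's formulation:

```python
import math

class MicroDetector:
    """One-metric anomaly detector: raise an alarm when the z-score of
    the monitored NDN metric exceeds a threshold (a simplified stand-in
    for the paper's hypothesis-testing micro detector)."""

    def __init__(self, threshold=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold

    def update(self, x):
        # Welford's online mean/variance over the training window.
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def alarm(self, x):
        std = math.sqrt(self.m2 / max(self.n - 1, 1)) or 1e-9
        return abs(x - self.mean) / std > self.threshold

det = MicroDetector()
for v in [10, 12, 11, 9, 10, 11, 10, 12]:   # normal cache-hit counts
    det.update(v)
print(det.alarm(11), det.alarm(40))          # False, True
```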

2019-07-01
Kebande, V. R., Kigwana, I., Venter, H. S., Karie, N. M., Wario, R. D..  2018.  CVSS Metric-Based Analysis, Classification and Assessment of Computer Network Threats and Vulnerabilities. 2018 International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD). :1–10.

This paper provides a Common Vulnerability Scoring System (CVSS) metric-based technique for classifying and analysing the prevailing Computer Network Security Vulnerabilities and Threats (CNSVT). The problem addressed in this paper is that, at the time of writing, there existed no effective approaches for analysing and classifying CNSVT for the purposes of assessments based on CVSS metrics. The authors have achieved this by generating a CVSS metric-based dynamic Vulnerability Analysis Classification Countermeasure (VACC) criterion that is able to rank vulnerabilities. The CVSS metric-based VACC has allowed the computation of a vulnerability Similarity Measure (VSM) using the Hamming and Euclidean distance metric functions. It also enabled the random measuring of the VSM for a selected number of vulnerabilities based on the [Ma-Ma], [Ma-Mi], [Mi-Ci], [Ma-Ci] ranking scores. This technique is aimed at allowing security experts to conduct proper vulnerability detection and assessments across computer-based networks by checking the probability that given threats will occur or not. The authors have also proposed high-level countermeasures for the vulnerabilities that have been listed. The authors have evaluated the CVSS metric-based VACC and the results are promising. Based on this technique, it is worth noting that these propositions can help in the development of stronger computer and network security tools.
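
To illustrate a Vulnerability Similarity Measure over CVSS-style vectors with the Hamming and Euclidean distance functions (the ordinal encoding and the distance-to-similarity normalization are assumptions, not the paper's exact construction):

```python
import math

# Hypothetical encoding of two CVSS-style vectors as numeric tuples
# (e.g., AV, AC, PR, UI, C, I, A mapped to ordinal values in 0..3).
v1 = (3, 2, 1, 1, 3, 3, 3)
v2 = (3, 1, 1, 0, 3, 2, 3)

def hamming(a, b):
    """Number of metric positions where the two vulnerabilities differ."""
    return sum(x != y for x, y in zip(a, b))

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similarity(a, b, dist, max_dist):
    """Map a distance into a [0, 1] similarity measure (assumed form)."""
    return 1 - dist(a, b) / max_dist

print("Hamming VSM:  ", similarity(v1, v2, hamming, max_dist=len(v1)))
print("Euclidean VSM:", similarity(v1, v2, euclidean,
                                   max_dist=math.sqrt(9 * len(v1))))
```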

Arabsorkhi, A., Ghaffari, F..  2018.  Security Metrics: Principles and Security Assessment Methods. 2018 9th International Symposium on Telecommunications (IST). :305–310.

Nowadays, information technology is an important part of human life and of organizations. Organizations face a variety of IT problems and, to solve them, have to improve their security. Thus there is a need for security assessments within organizations to ensure security conditions. The use of security standards and general metrics can be useful for measuring the safety of an organization; however, it should be noted that general metrics applied to businesses at large cannot be effective in every particular situation. It is therefore important to select metric standards for different businesses to improve both cost and organizational security. The selection of suitable security metrics lies in the use of an efficient way to identify them. Given the numerous complexities of these metrics and the extent to which they are defined, this paper, based on a comparative study and the benchmarking method, proposes a taxonomy for security metrics intended to help a business choose metrics tailored to its needs and conditions.

2019-06-28
Plasencia-Balabarca, F., Mitacc-Meza, E., Raffo-Jara, M., Silva-Cárdenas, C..  2018.  Robust Functional Verification Framework Based in UVM Applied to an AES Encryption Module. 2018 New Generation of CAS (NGCAS). :194-197.

Since the past century, the digital design industry has performed an outstanding role in the development of electronics. A great variety of designs are developed daily, and these designs must be submitted to high standards of verification in order to ensure 100% reliability and the achievement of all design requirements. The Universal Verification Methodology (UVM) is the current industry standard for the verification process due to its reusability, scalability, time-efficiency and feasibility of handling high-level designs. This research proposes a functional verification framework using UVM for an AES encryption module, based on a very detailed and robust verification plan. This document describes the complete verification process as done in the industry for a popular module used in information-security applications in the field of cryptography, defining the basis for future projects. The overall results show the achievement of the high verification standards required in industry applications and highlight the advantages of UVM over SystemVerilog-based functional verification and direct verification methodologies previously developed for the AES module.

2019-06-10
Ghonge, M. M., Jawandhiya, P. M., Thakare, V. M..  2018.  Reputation and trust based selfish node detection system in MANETs. 2018 2nd International Conference on Inventive Systems and Control (ICISC). :661–667.

With the progress of technology, it is becoming viable to set up mobile ad hoc networks for non-military services as well. Examples include networks of cars, the provision of communication facilities in remote areas, and exploiting the density of existing nodes such as cellular telephones in urban areas to offload or otherwise avoid using base stations. In such networks, there is no strong reason to assume that the nodes cooperate. Some nodes may be disruptive and some may attempt to save resources (e.g. battery power, memory, CPU cycles) through "selfish" behavior. The proposed method focuses on the robustness of packet forwarding: maintaining the usual packet throughput of a mobile ad hoc network in the presence of nodes that misbehave at the routing layer. The proposed system listens at the routing layer and does not attempt to address attacks at lower layers (e.g. jamming the network channel) or passive attacks such as eavesdropping. Moreover, it does not deal with issues such as node authentication, securing routes, or message encryption. The proposed solution addresses an orthogonal problem: the encouragement of proper routing participation.

2019-03-22
Liu, Y., Li, X., Xiao, L..  2018.  Service Oriented Resilience Strategy for Cloud Data Center. 2018 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :269-274.

As an information hinge for various trades and professions in the era of big data, a cloud data center bears the responsibility to provide uninterrupted service. To cope with the impact of failure and interruption on the Quality of Service (QoS) during operation, it is important to guarantee the resilience of the cloud data center. Thus, different resilience actions are conducted over its life cycle; together they form a resilience strategy. In order to measure the effect of the resilience strategy on system resilience, this paper proposes a new approach to model and evaluate the resilience strategy for a cloud data center, focusing on its core service-providing part, the IT architecture. A comprehensive resilience metric based on resilience loss is put forward, considering the characteristics of a cloud data center. Furthermore, a mapping model between system resilience and the resilience strategy is built up. Then, based on a hierarchical colored generalized stochastic Petri net (HCGSPN) model depicting the procedure of the system processing service requests, a simulation is conducted to evaluate the resilience strategy through the metric calculation. With a case study of a company's cloud data center, the applicability and correctness of the approach are demonstrated.
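
A generic way to picture a resilience-loss metric is as the area between the target and delivered QoS over an incident window; the paper's comprehensive metric is more elaborate, but the sketch conveys the idea:

```python
def resilience_loss(qos_trace, target=1.0, dt=1.0):
    """Area between the target QoS level and the delivered QoS over time
    (rectangle rule); smaller is better."""
    return sum(max(target - q, 0.0) * dt for q in qos_trace)

# Hypothetical QoS trace: a failure hits at t=2 and recovery actions
# restore full service by t=7.
trace = [1.0, 1.0, 0.3, 0.4, 0.6, 0.8, 0.9, 1.0]
print("resilience loss:", resilience_loss(trace))   # 2.0
```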

Kumar, A., Abdelhadi, A., Clancy, C..  2018.  Novel Anomaly Detection and Classification Schemes for Machine-to-Machine Uplink. 2018 IEEE International Conference on Big Data (Big Data). :1284-1289.

Machine-to-Machine (M2M) networks, being connected to the internet at large, inherit all the cyber-vulnerabilities of standard Information Technology (IT) systems. Since perfect cyber-security and robustness is an idealistic construct, it is worthwhile to design intrusion detection schemes to quickly detect and mitigate the harmful consequences of cyber-attacks. Volumetric anomaly detection schemes have been popularized due to their low complexity, but they cannot detect low-volume sophisticated attacks and also suffer from high false-alarm rates. To overcome these limitations, feature-based detection schemes have been studied for IT networks. However, these schemes cannot be easily adapted to M2M systems due to the fundamental architectural and functional differences between M2M and IT systems. In this paper, we propose novel feature-based detection schemes for a general M2M uplink to detect Distributed Denial-of-Service (DDoS) attacks, emergency scenarios and terminal device failures. The detection of DDoS attacks and emergency scenarios involves building up a database of legitimate M2M connections during a training phase and then flagging new M2M connections as anomalies during the evaluation phase. To distinguish between DDoS attacks and emergency scenarios, which yield similar signatures for anomaly detection schemes, we propose a modified Canberra distance metric. It measures the similarity or difference in the characteristics of inter-arrival time epochs for any two anomalous streams. We detect device failures by inspecting for a decrease in active M2M connections over a reasonably large time interval. Lastly, using Monte Carlo simulations, we show that the proposed anomaly detection schemes have high detection performance and low false-alarm rates.
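
The unmodified Canberra distance between two inter-arrival-time sequences is straightforward; the paper's modification adjusts this baseline in a way the abstract does not specify:

```python
def canberra(x, y):
    """Canberra distance between two inter-arrival-time vectors:
    sum of |x_i - y_i| / (|x_i| + |y_i|) over all epochs."""
    total = 0.0
    for a, b in zip(x, y):
        denom = abs(a) + abs(b)
        total += abs(a - b) / denom if denom else 0.0
    return total

# Hypothetical inter-arrival times (ms) of two anomalous M2M streams.
ddos      = [1.2, 0.9, 1.1, 1.0, 0.8]   # sustained, machine-like cadence
emergency = [1.1, 4.5, 0.7, 6.2, 0.9]   # bursty, event-driven cadence
print("Canberra distance:", canberra(ddos, emergency))
```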

Alavizadeh, H., Jang-Jaccard, J., Kim, D. S..  2018.  Evaluation for Combination of Shuffle and Diversity on Moving Target Defense Strategy for Cloud Computing. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :573-578.

Moving Target Defence (MTD) has been recently proposed and is an emerging proactive approach which provides asynchronous defensive strategies. Unlike traditional security solutions that focus on removing vulnerabilities, MTD makes a system dynamic and unpredictable by continuously changing the attack surface to confuse attackers. MTD can be utilized in cloud computing to address the cloud's security-related problems. Much of the literature proposes MTD methods in various contexts, but it still lacks approaches for evaluating the effectiveness of the proposed MTD methods. In this paper, we propose a combination of Shuffle and Diversity MTD techniques and investigate the effects of deploying these techniques from two perspectives, based on two groups of security metrics: (i) system risk, which is the cloud provider's perspective, and (ii) attack cost and return on attack, which reflect the attacker's point of view. Moreover, we utilize a scalable Graphical Security Model (GSM) to manage the complexity of the security analysis. Finally, we show that combining MTD techniques can improve both of the aforementioned groups of security metrics, while the individual techniques cannot.

Quweider, M., Lei, H., Zhang, L., Khan, F..  2018.  Managing Big Data in Visual Retrieval Systems for DHS Applications: Combining Fourier Descriptors and Metric Space Indexing. 2018 1st International Conference on Data Intelligence and Security (ICDIS). :188-193.

Image retrieval systems have been an active area of research for more than thirty years, progressively producing improved algorithms that raise performance metrics, operate in different domains, take advantage of different features extracted from the images to be retrieved, and have different desirable invariance properties. With the ever-growing visual databases of images and videos produced by a myriad of devices comes the challenge of selecting effective features and performing fast retrieval on such databases. In this paper, we incorporate Fourier descriptors (FD) along with a metric-based balanced indexing tree as a viable solution to DHS (Department of Homeland Security) needs for quick identification and retrieval of weapon images. The FDs allow a simple but effective outline feature representation of an object, while the M-tree provides a dynamic, fast, and balanced search over such features. Motivated by looking for applications of interest to DHS, we have created a basic guns-and-rifles database that can be used to identify weapons in images and videos extracted from media sources. Our simulations show excellent performance in both representation and fast retrieval speed.
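
Fourier descriptors of an outline can be sketched as normalized magnitudes of the leading Fourier coefficients of the complex boundary sequence, which gives exactly the translation, scale, and rotation invariances such outline features are valued for (toy contour, hypothetical helper):

```python
import numpy as np

def fourier_descriptors(contour, k=8):
    """Outline signature from the first k Fourier coefficients of the
    boundary treated as a complex sequence."""
    z = contour[:, 0] + 1j * contour[:, 1]   # boundary points as complex numbers
    coeffs = np.fft.fft(z)
    coeffs[0] = 0                            # drop DC term -> translation invariance
    mags = np.abs(coeffs[1:k + 1])           # magnitudes -> rotation invariance
    return mags / (mags[0] if mags[0] else 1.0)  # normalize -> scale invariance

# Toy square outline sampled at 16 boundary points.
t = np.linspace(0, 2 * np.pi, 16, endpoint=False)
square = np.c_[np.sign(np.cos(t)), np.sign(np.sin(t))]
print(fourier_descriptors(square, k=4))
```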

2019-03-11
Ghafoor, K. Z., Kong, L., Sadiq, A. S., Doukha, Z., Shareef, F. M..  2018.  Trust-aware routing protocol for mobile crowdsensing environments. IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :82–87.
Link quality, trust management and energy efficiency are considered the main factors that affect the performance and lifetime of Mobile CrowdSensing (MCS). Routing packets toward the sink node can be a daunting task if the aforementioned factors are considered; correspondingly, routing packets by considering only the shortest path or residual energy leads to suboptimal data forwarding. To this end, we propose a Fuzzy logic based Routing (FR) solution that incorporates the social behaviour of human beings, link quality, and node quality to make the optimal routing decision. FR leverages a friendship mechanism for trust management, Signal to Noise Ratio (SNR) to assure the selection of nodes with good link quality, and residual energy for a long-lasting sensor lifetime. Extensive simulations show that the FR solution outperforms the existing approaches in terms of network lifetime and packet delivery ratio.
2019-03-06
Kawanishi, Y., Nishihara, H., Souma, D., Yoshida, H., Hata, Y..  2018.  A Study on Quantitative Risk Assessment Methods in Security Design for Industrial Control Systems. 2018 IEEE 16th Intl Conf on Dependable, Autonomic and Secure Computing, 16th Intl Conf on Pervasive Intelligence and Computing, 4th Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress(DASC/PiCom/DataCom/CyberSciTech). :62-69.

In recent years, there has been progress in applying information technology to industrial control systems (ICS), which is expected to lower the development cost of control devices and systems. On the other hand, security threats are becoming an important problem. In 2017, a command injection issue on a data logger was reported. In this paper, we focus on the risk assessment in security design for data loggers used in industrial control systems. Our aim is to provide a risk assessment method optimized for control devices and systems, in such a way that one can prioritize threats more precisely, so that work resources (time and budget) can be assigned to the more important threats. We discuss problems with the application of the automotive-security guideline JASO TP15002 to ICS risk assessment. Consequently, we propose a three-phase risk assessment method with a novel Risk Scoring System (RSS) for quantitative risk assessment, RSS-CWSS. The idea behind this method is to apply CWSS scoring systems to RSS by fixing values for some of the CWSS metrics, considering what the designers can evaluate during the concept phase. Our case study with an ICS employing a data logger clarifies that RSS-CWSS offers an interesting property: better risk-score dispersion than the TP15002-specified RSS.

2019-02-13
Gevargizian, J., Kulkarni, P..  2018.  MSRR: Measurement Framework For Remote Attestation. 2018 IEEE 16th Intl Conf on Dependable, Autonomic and Secure Computing, 16th Intl Conf on Pervasive Intelligence and Computing, 4th Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress(DASC/PiCom/DataCom/CyberSciTech). :748–753.
Measurers are critical to a remote attestation (RA) system to verify the integrity of a remote untrusted host. Run-time measurers in a dynamic RA system sample the dynamic program state of the host to form evidence in order to establish trust by a remote system (appraiser). However, existing run-time measurers are tightly integrated with specific software. Such measurers need to be generated anew for each software, which is a manual process that is both challenging and tedious. In this paper we present a novel approach to decouple application-specific measurement policies from the measurers tasked with performing the actual run-time measurement. We describe MSRR (MeaSeReR), a novel general-purpose measurement framework that is agnostic of the target application. We show how measurement policies written per application can use MSRR, eliminating much time and effort spent on reproducing core measurement functionality. We describe MSRR's robust querying language, which allows the appraiser to accurately specify the what, when, and how to measure. We evaluate MSRR's overhead and demonstrate its functionality.
2019-01-16
Shi, T., Shi, W., Wang, C., Wang, Z..  2018.  Compressed Sensing based Intrusion Detection System for Hybrid Wireless Mesh Networks. 2018 International Conference on Computing, Networking and Communications (ICNC). :11–15.
As wireless mesh networks (WMNs) develop rapidly, security issues become increasingly important. An Intrusion Detection System (IDS) is one of the crucial ways to detect attacks. However, IDS in wireless networks, including WMNs, brings high detection overhead, which degrades network performance. In this paper, we apply compressed sensing (CS) theory to IDS and propose a CS-based IDS for hybrid WMNs. Since CS can reconstruct a sparse signal with compressive sampling, we process the detected data and construct sparse original signals. Through a reconstruction algorithm, the compressively sampled data can be reconstructed and used for detecting intrusions, which reduces the detection overhead. We also propose the Active State Metric (ASM) as an attack metric for recognizing attacks; it measures the PHY-layer activity and energy consumption of each node. Intensive simulations show that under 50% attack density, our proposed IDS can ensure a 95% detection rate while reducing detection overhead by about 40% on average.
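
The abstract does not name the reconstruction algorithm; Orthogonal Matching Pursuit is one standard choice for recovering a sparse signal from compressive samples, sketched here on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedy recovery of a k-sparse signal
    from compressive measurements y = A x (a standard CS reconstruction,
    not necessarily the algorithm used in the paper)."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Re-fit the signal on the selected support by least squares.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

n, m, k = 100, 30, 3          # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = omp(A, A @ x_true, k)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```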