Found 194 results

Filters: Keyword is Measurement
Severson, T., Rodriguez-Seda, E., Kiriakidis, K., Croteau, B., Krishnankutty, D., Robucci, R., Patel, C., Banerjee, N..  2018.  Trust-Based Framework for Resilience to Sensor-Targeted Attacks in Cyber-Physical Systems. 2018 Annual American Control Conference (ACC). :6499-6505.
Networked control systems improve the efficiency of cyber-physical plants both functionally, by the availability of data generated even in far-flung locations, and operationally, by the adoption of standard protocols. A side-effect, however, is that now the safety and stability of a local process and, in turn, of the entire plant are more vulnerable to malicious agents. Leveraging the communication infrastructure, the authors here present the design of networked control systems with built-in resilience. Specifically, the paper addresses attacks known as false data injections that originate within compromised sensors. In the proposed framework for closed-loop control, the feedback signal is constructed by weighted consensus of estimates of the process state gathered from other interconnected processes. Observers are introduced to generate the state estimates from the local data. Side-channel monitors are attached to each primary sensor in order to assess proper code execution. These monitors provide estimates of the trust assigned to each observer output and, more importantly, independent of it; these estimates serve as weights in the consensus algorithm. The authors tested the concept on a multi-sensor networked physical experiment with six primary sensors. The weighted consensus was demonstrated to yield a feedback signal within specified accuracy even if four of the six primary sensors were injecting false data.
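The trust-weighted consensus at the heart of this framework can be sketched in a few lines; the numbers and the six-sensor scenario below are illustrative, and the fusion is a plain normalized weighted average rather than the authors' exact algorithm:

```python
import numpy as np

def weighted_consensus(estimates, trust):
    """Fuse per-observer state estimates using normalized trust weights."""
    estimates = np.asarray(estimates, dtype=float)
    trust = np.asarray(trust, dtype=float)
    weights = trust / trust.sum()
    return weights @ estimates

# Six observers estimate a scalar state whose true value is 1.0; four
# compromised sensors inject false data but carry near-zero trust weights.
estimates = [1.0, 1.02, 5.0, -3.0, 8.0, 7.5]
trust = [0.95, 0.90, 0.01, 0.01, 0.02, 0.01]
fused = weighted_consensus(estimates, trust)  # stays close to 1.0
```

Because the side-channel monitors supply trust values independently of the sensor outputs, false injections are down-weighted before they reach the feedback loop.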
Ghugar, U., Pradhan, J..  2018.  NL-IDS: Trust Based Intrusion Detection System for Network Layer in Wireless Sensor Networks. 2018 Fifth International Conference on Parallel, Distributed and Grid Computing (PDGC). :512-516.

Over the last few years, security in wireless sensor networks (WSNs) has become essential because WSN applications involve sharing important information between nodes. The open deployment of the network raises a large number of security issues, and attackers disrupt the security system by attacking the different protocol layers in a WSN. The standard AODV routing protocol faces security issues during the route discovery process, yet data should be transmitted to the destination along a secure path. To support this, we propose a trust-based intrusion detection system (NL-IDS) for the network layer in WSNs to detect black hole attackers in the network. The trust of a sensor node is calculated from the deviation of key factors at the network layer under a black hole attack. We use the watchdog technique, in which a sensor node continuously monitors its neighbor by calculating a periodic trust value. Finally, the overall trust value of the sensor node is evaluated from the gathered trust metrics of the network layer (past and current trust values). The NL-IDS scheme efficiently identifies nodes that are malicious with respect to black hole attacks at the network layer. To analyze the performance of NL-IDS, we simulated the model in MATLAB R2015a; the results show that NL-IDS outperforms the scheme of Wang et al. [11] in terms of detection accuracy and false alarm rate.
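The watchdog-based trust update can be illustrated as an exponential blend of past trust and the currently observed forwarding ratio; the smoothing factor and detection threshold below are hypothetical, not the paper's parameters:

```python
def update_trust(prev_trust, forwarded, expected, alpha=0.7):
    """Blend past trust with the forwarding ratio observed by the watchdog."""
    current = forwarded / expected if expected else 1.0
    return alpha * prev_trust + (1 - alpha) * current

trust = 1.0
# A black-hole neighbour drops every packet over five monitoring periods.
for _ in range(5):
    trust = update_trust(trust, forwarded=0, expected=20)
malicious = trust < 0.5  # flag the node once trust falls below a threshold
```

Repeated drops decay the trust value geometrically, so a persistent black hole is flagged within a few monitoring periods.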

Mai, H. L., Nguyen, T., Doyen, G., Cogranne, R., Mallouli, W., Oca, E. M. de, Festor, O..  2018.  Towards a security monitoring plane for named data networking and its application against content poisoning attack. NOMS 2018 - 2018 IEEE/IFIP Network Operations and Management Symposium. :1–9.

Named Data Networking (NDN) is the most mature proposal of the Information Centric Networking paradigm, a clean-slate approach for the Future Internet. Although NDN was designed to natively tackle security issues inherent to IP networks, security attacks newly introduced in its transitional phase threaten NDN's practical deployment. Therefore, a security monitoring plane for NDN is indispensable before any provider deploys this novel architecture in an operational context. We propose an approach for monitoring and anomaly detection in NDN nodes that leverages Bayesian network techniques. A list of monitored metrics is introduced as a quantitative measure of the behavior of an NDN node. Leveraging hypothesis testing theory, a micro detector is developed to detect whenever a metric deviates significantly from its normal behavior. A Bayesian network structure that correlates alarms from the micro detectors is designed based on expert knowledge of the NDN specification and the NFD implementation. The relevance and performance of our security monitoring approach are demonstrated on the Content Poisoning Attack (CPA), one of the most critical attacks in NDN, through extensive experimental data collected from a real NDN deployment.

Kebande, V. R., Kigwana, I., Venter, H. S., Karie, N. M., Wario, R. D..  2018.  CVSS Metric-Based Analysis, Classification and Assessment of Computer Network Threats and Vulnerabilities. 2018 International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD). :1–10.
This paper provides a Common Vulnerability Scoring System (CVSS) metric-based technique for classifying and analysing prevailing Computer Network Security Vulnerabilities and Threats (CNSVT). The problem addressed is that, at the time of writing, no effective approaches existed for analysing and classifying CNSVT for assessment purposes based on CVSS metrics. The authors achieve this by generating a CVSS metric-based dynamic Vulnerability Analysis Classification Countermeasure (VACC) criterion that is able to rank vulnerabilities. The CVSS metric-based VACC allows the computation of a vulnerability Similarity Measure (VSM) using the Hamming and Euclidean distance functions. It also enables measuring the VSM for a selected number of vulnerabilities based on the [Ma-Ma], [Ma-Mi], [Mi-Ci], [Ma-Ci] ranking scores. The technique is aimed at allowing security experts to conduct proper vulnerability detection and assessment across computer-based networks by checking the probability that given threats will occur. The authors also propose high-level countermeasures for the listed vulnerabilities. The authors have evaluated the CVSS metric-based VACC and the results are promising. These propositions can help in the development of stronger computer and network security tools.
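The similarity-measure step can be illustrated with the two distance functions the abstract names; the numeric encodings of the two CVSS vectors below are hypothetical placeholders:

```python
import math

def hamming(u, v):
    """Number of positions where the two metric vectors differ."""
    return sum(a != b for a, b in zip(u, v))

def euclidean(u, v):
    """Straight-line distance between the two metric vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Hypothetical numeric encodings of two CVSS base vectors.
v1 = [0.85, 0.62, 0.77, 0.56, 0.56, 0.56]
v2 = [0.85, 0.44, 0.77, 0.22, 0.56, 0.56]
h = hamming(v1, v2)    # differing metric positions
e = euclidean(v1, v2)  # magnitude of the difference
```

Hamming counts how many metrics differ; Euclidean additionally weighs how far apart the differing values are, which is why the paper uses both.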
Zieger, A., Freiling, F., Kossakowski, K..  2018.  The β-Time-to-Compromise Metric for Practical Cyber Security Risk Estimation. 2018 11th International Conference on IT Security Incident Management IT Forensics (IMF). :115–133.
To manage cybersecurity risks in practice, a simple yet effective method to assess such risks for individual systems is needed. With time-to-compromise (TTC), McQueen et al. (2005) introduced such a metric that measures the expected time that a system remains uncompromised given a specific threat landscape. Unlike other approaches that require complex system modeling, TTC combines simplicity with expressiveness and has therefore evolved into one of the most successful cybersecurity metrics in practice. We revisit TTC and identify several mathematical and methodological shortcomings, which we address by embedding all aspects of the metric into the continuous domain and by incorporating information about vulnerability characteristics and other cyber threat intelligence into the model. We propose β-TTC, a formal extension of TTC that includes information from CVSS vectors as well as a continuous attacker skill based on a β-distribution. We show that our new metric (1) remains simple enough for practical use and (2) gives more realistic predictions than the original TTC, using data from a modern, productively used vulnerability database of a national CERT.
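The continuous attacker skill can be pictured with a small Monte-Carlo sketch: a skill level drawn from a Beta(2, 5) distribution scales a baseline compromise time. The shape parameters and the linear scaling are illustrative assumptions, not the paper's actual β-TTC formula:

```python
import random

def sample_ttc_days(baseline_days, n=10_000, a=2.0, b=5.0, seed=42):
    """Monte-Carlo estimate of expected time-to-compromise under a
    Beta-distributed attacker skill in (0, 1); higher skill means
    less time needed (assumed linear scaling for illustration)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        skill = rng.betavariate(a, b)
        total += baseline_days * (1 - skill)
    return total / n

expected = sample_ttc_days(baseline_days=30.0)
```

With E[Beta(2, 5)] = 2/7, the estimate settles near 30 × (1 − 2/7) ≈ 21.4 days; changing the shape parameters models a more or less skilled attacker population.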
Arabsorkhi, A., Ghaffari, F..  2018.  Security Metrics: Principles and Security Assessment Methods. 2018 9th International Symposium on Telecommunications (IST). :305–310.
Nowadays, information technology is an important part of human life and of organizations, and organizations face IT-related security problems. To solve these problems, they have to improve their security practices, so security assessments are needed within organizations to ensure adequate security conditions. Security standards and general metrics can be useful for measuring the safety of an organization; however, metrics that apply to businesses in general may not be effective in a particular situation. It is therefore important to select metric standards suited to each business, to improve both cost and organizational security. Selecting suitable security measures requires an efficient way to identify them. Given the numerous complexities of these metrics and the extent to which they are defined, this paper, based on a comparative study and the benchmarking method, proposes a taxonomy of security metrics intended to help a business choose metrics tailored to its needs and conditions.
Plasencia-Balabarca, F., Mitacc-Meza, E., Raffo-Jara, M., Silva-Cárdenas, C..  2018.  Robust Functional Verification Framework Based in UVM Applied to an AES Encryption Module. 2018 New Generation of CAS (NGCAS). :194-197.

Since the past century, the digital design industry has performed an outstanding role in the development of electronics. A great variety of designs are developed daily, and these designs must be submitted to high standards of verification in order to ensure 100% reliability and the achievement of all design requirements. The Universal Verification Methodology (UVM) is the current industry standard for the verification process due to its reusability, scalability, time-efficiency, and feasibility for handling high-level designs. This research proposes a functional verification framework using UVM for an AES encryption module, based on a very detailed and robust verification plan. This document describes the complete verification process as done in industry for a popular module used in information-security applications in the field of cryptography, defining the basis for future projects. The overall results show the achievement of the high verification standards required in industry applications and highlight the advantages of UVM over SystemVerilog-based functional verification and the direct verification methodologies previously developed for the AES module.

Ghonge, M. M., Jawandhiya, P. M., Thakare, V. M..  2018.  Reputation and trust based selfish node detection system in MANETs. 2018 2nd International Conference on Inventive Systems and Control (ICISC). :661–667.

With advances in technology, it is becoming viable to set up mobile ad hoc networks for non-military services as well. Examples include networks of cars, provision of communication facilities in remote areas, and exploiting the density of existing nodes in urban areas, such as cellular telephones, to offload or avoid using base stations. In such networks, there is no strong reason to assume that nodes cooperate. Some nodes may be disruptive, and others may attempt to save resources (e.g., battery power, memory, CPU cycles) through "selfish" behavior. The proposed method focuses on the robustness of packet forwarding: maintaining the usual packet throughput of a mobile ad hoc network in the presence of nodes that misbehave at the routing layer. The proposed system operates at the routing layer and does not attempt to address attacks at lower layers (e.g., jamming the network channel) or passive attacks such as eavesdropping. Moreover, it does not deal with issues such as node authentication, securing routes, or message encryption. The proposed solution addresses an orthogonal problem: the encouragement of proper routing participation.

Liu, Y., Li, X., Xiao, L..  2018.  Service Oriented Resilience Strategy for Cloud Data Center. 2018 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :269-274.

As an information hub for various trades and professions in the era of big data, the cloud data center bears the responsibility of providing uninterrupted service. To cope with the impact of failures and interruptions during operation on the Quality of Service (QoS), it is important to guarantee the resilience of the cloud data center. Thus, different resilience actions are conducted over its life cycle; together they constitute a resilience strategy. In order to measure the effect of a resilience strategy on system resilience, this paper proposes a new approach to model and evaluate resilience strategies for cloud data centers, focusing on the core of service provision: the IT architecture. A comprehensive resilience metric based on resilience loss is put forward, considering the characteristics of the cloud data center. Furthermore, a mapping model between system resilience and resilience strategy is built. Then, based on a hierarchical colored generalized stochastic Petri net (HCGSPN) model depicting how the system processes service requests, a simulation is conducted to evaluate the resilience strategy through the metric calculation. With a case study of a company's cloud data center, the applicability and correctness of the approach are demonstrated.
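The resilience-loss idea, i.e. the performance shortfall accumulated over a disruption window, can be sketched directly; the QoS trace below is invented for illustration and stands in for the paper's HCGSPN-driven simulation:

```python
def resilience_loss(performance, target=1.0, dt=1.0):
    """Area between the target QoS and the delivered QoS over time."""
    return sum((target - p) * dt for p in performance)

# QoS trace: a failure hits at t=2, recovery actions restore full
# service by t=6; values are the fraction of requests served.
trace = [1.0, 1.0, 0.4, 0.5, 0.7, 0.9, 1.0]
loss = resilience_loss(trace)
```

A smaller accumulated loss means a more effective resilience strategy, so two strategies can be compared by running the same failure scenario against each.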

Kumar, A., Abdelhadi, A., Clancy, C..  2018.  Novel Anomaly Detection and Classification Schemes for Machine-to-Machine Uplink. 2018 IEEE International Conference on Big Data (Big Data). :1284-1289.

Machine-to-Machine (M2M) networks, being connected to the internet at large, inherit all the cyber-vulnerabilities of standard Information Technology (IT) systems. Since perfect cyber-security and robustness is an idealistic construct, it is worthwhile to design intrusion detection schemes that quickly detect and mitigate the harmful consequences of cyber-attacks. Volumetric anomaly detection schemes have been popularized due to their low complexity, but they cannot detect low-volume sophisticated attacks and also suffer from high false-alarm rates. To overcome these limitations, feature-based detection schemes have been studied for IT networks. However, these schemes cannot easily be adapted to M2M systems due to the fundamental architectural and functional differences between M2M and IT systems. In this paper, we propose novel feature-based detection schemes for a general M2M uplink to detect Distributed Denial-of-Service (DDoS) attacks, emergency scenarios, and terminal device failures. The detection of DDoS attacks and emergency scenarios involves building a database of legitimate M2M connections during a training phase and then flagging new M2M connections as anomalies during the evaluation phase. To distinguish between DDoS attacks and emergency scenarios, which yield similar signatures for anomaly detection schemes, we propose a modified Canberra distance metric. It measures the similarity or difference in the characteristics of inter-arrival time epochs for any two anomalous streams. We detect device failures by inspecting for a decrease in active M2M connections over a reasonably large time interval. Lastly, using Monte-Carlo simulations, we show that the proposed anomaly detection schemes have high detection performance and a low false-alarm rate.
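The (unmodified) Canberra distance the authors build on can be sketched as follows; the inter-arrival sequences are invented, and the paper's modification is not reproduced here:

```python
def canberra(x, y):
    """Canberra distance between two inter-arrival-time sequences;
    terms with a zero denominator contribute nothing."""
    total = 0.0
    for a, b in zip(x, y):
        denom = abs(a) + abs(b)
        if denom:
            total += abs(a - b) / denom
    return total

# Inter-arrival epochs (seconds) of two anomalous streams:
ddos = [0.01, 0.01, 0.02, 0.01]    # machine-like, tightly clustered
emergency = [0.5, 0.9, 0.4, 1.1]   # bursty, event-driven traffic
d_similar = canberra(ddos, [0.01, 0.02, 0.02, 0.01])
d_different = canberra(ddos, emergency)
```

Because every term is normalized by the local magnitude, the metric separates streams with very different inter-arrival characteristics even when absolute time scales are small.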

Alavizadeh, H., Jang-Jaccard, J., Kim, D. S..  2018.  Evaluation for Combination of Shuffle and Diversity on Moving Target Defense Strategy for Cloud Computing. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :573-578.

Moving Target Defence (MTD) is an emerging, recently proposed proactive approach that provides asynchronous defensive strategies. Unlike traditional security solutions that focus on removing vulnerabilities, MTD makes a system dynamic and unpredictable by continuously changing its attack surface to confuse attackers. MTD can be utilized in cloud computing to address the cloud's security-related problems. Much of the literature proposes MTD methods in various contexts, but approaches to evaluate the effectiveness of a proposed MTD method are still lacking. In this paper, we propose a combination of the Shuffle and Diversity MTD techniques and investigate the effects of deploying these techniques from two perspectives, corresponding to two groups of security metrics: (i) system risk, the cloud provider's perspective, and (ii) attack cost and return on attack, the attacker's point of view. Moreover, we utilize a scalable Graphical Security Model (GSM) to keep the security analysis tractable. Finally, we show that combining the MTD techniques can improve both of the aforementioned groups of security metrics, while the individual techniques cannot.

Quweider, M., Lei, H., Zhang, L., Khan, F..  2018.  Managing Big Data in Visual Retrieval Systems for DHS Applications: Combining Fourier Descriptors and Metric Space Indexing. 2018 1st International Conference on Data Intelligence and Security (ICDIS). :188-193.

Image retrieval systems have been an active area of research for more than thirty years, progressively producing improved algorithms that improve performance metrics, operate in different domains, take advantage of different features extracted from the images to be retrieved, and have different desirable invariance properties. With the ever-growing visual databases of images and videos produced by a myriad of devices comes the challenge of selecting effective features and performing fast retrieval on such databases. In this paper, we incorporate Fourier descriptors (FD) along with a metric-based balanced indexing tree as a viable solution to DHS (Department of Homeland Security) needs for quick identification and retrieval of weapon images. The FDs allow a simple but effective outline feature representation of an object, while the M-tree provides dynamic, fast, and balanced search over such features. Motivated by applications of interest to DHS, we have created a basic database of guns and rifles that can be used to identify weapons in images and videos extracted from media sources. Our simulations show excellent performance in both representation and retrieval speed.
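A minimal Fourier-descriptor sketch, assuming a closed boundary sampled at equal steps: dropping the DC term removes translation, normalizing by the first harmonic removes scale, and keeping only magnitudes removes rotation. This follows the textbook construction, not necessarily the paper's exact variant:

```python
import numpy as np

def fourier_descriptors(boundary_xy, keep=8):
    """Outline signature invariant to translation, scale, and rotation."""
    z = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]  # complex contour
    coeffs = np.fft.fft(z)
    mags = np.abs(coeffs[1:keep + 1])  # drop DC, keep low harmonics
    return mags / mags[0]              # normalize out scale

# A square-ish outline sampled at 64 points, then the same outline
# shifted and doubled in size: the descriptors should match.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
square = np.c_[np.sign(np.cos(t)), np.sign(np.sin(t))]
moved = 2.0 * square + [5.0, -3.0]
fd1 = fourier_descriptors(square)
fd2 = fourier_descriptors(moved)
```

Each descriptor vector is short and fixed-length, which is exactly what makes it cheap to index in a metric tree such as the M-tree.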

Ghafoor, K. Z., Kong, L., Sadiq, A. S., Doukha, Z., Shareef, F. M..  2018.  Trust-aware routing protocol for mobile crowdsensing environments. IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :82–87.
Link quality, trust management and energy efficiency are considered as main factors that affect the performance and lifetime of Mobile CrowdSensing (MCS). Routing packets toward the sink node can be a daunting task if aforementioned factors are considered. Correspondingly, routing packets by considering only shortest path or residual energy lead to suboptimal data forwarding. To this end, we propose a Fuzzy logic based Routing (FR) solution that incorporates social behaviour of human beings, link quality, and node quality to make the optimal routing decision. FR leverages friendship mechanism for trust management, Signal to Noise Ratio (SNR) to assure good link quality node selection, and residual energy for long lasting sensor lifetime. Extensive simulations show that the FR solution outperforms the existing approaches in terms of network lifetime and packet delivery ratio.
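The multi-factor next-hop decision can be illustrated with a toy crisp scoring function; the membership ramp and the weights below are invented stand-ins for the paper's fuzzy rule base:

```python
def route_score(snr_db, trust, residual_energy):
    """Toy next-hop score: map SNR to [0, 1] with a linear ramp, then
    combine link quality, trust, and energy by a weighted average
    (illustrative weights, not the paper's fuzzy inference)."""
    good_link = min(max((snr_db - 5) / 20, 0.0), 1.0)  # 5 dB -> 0, 25 dB -> 1
    return 0.4 * good_link + 0.35 * trust + 0.25 * residual_energy

neighbours = {
    "a": route_score(22, trust=0.9, residual_energy=0.8),
    "b": route_score(28, trust=0.2, residual_energy=0.9),
}
best = max(neighbours, key=neighbours.get)  # trusted node wins despite lower SNR
```

The example shows the design intent: a neighbour with the strongest link can still lose to a more trustworthy one, which is the point of combining trust with link quality.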
Kawanishi, Y., Nishihara, H., Souma, D., Yoshida, H., Hata, Y..  2018.  A Study on Quantitative Risk Assessment Methods in Security Design for Industrial Control Systems. 2018 IEEE 16th Intl Conf on Dependable, Autonomic and Secure Computing, 16th Intl Conf on Pervasive Intelligence and Computing, 4th Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress(DASC/PiCom/DataCom/CyberSciTech). :62-69.

In recent years, there has been progress in applying information technology to industrial control systems (ICS), which is expected to lower the development cost of control devices and systems. On the other hand, security threats are becoming an important problem. In 2017, a command injection issue on a data logger was reported. In this paper, we focus on risk assessment in the security design of data loggers used in industrial control systems. Our aim is to provide a risk assessment method optimized for control devices and systems, in such a way that threats can be prioritized more precisely, so that work resources (time and budget) can be assigned to the most important threats. We discuss problems with applying the automotive-security guideline JASO TP15002 to ICS risk assessment. Consequently, we propose a three-phase risk assessment method with a novel Risk Scoring System (RSS) for quantitative risk assessment, RSS-CWSS. The idea behind this method is to apply the CWSS scoring system to RSS by fixing the values of some CWSS metrics, considering what designers can evaluate during the concept phase. Our case study with an ICS employing a data logger clarifies that RSS-CWSS offers an interesting property: it has better risk-score dispersion than the TP15002-specified RSS.

Gevargizian, J., Kulkarni, P..  2018.  MSRR: Measurement Framework For Remote Attestation. 2018 IEEE 16th Intl Conf on Dependable, Autonomic and Secure Computing, 16th Intl Conf on Pervasive Intelligence and Computing, 4th Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress(DASC/PiCom/DataCom/CyberSciTech). :748–753.
Measurers are critical to a remote attestation (RA) system for verifying the integrity of a remote untrusted host. Run-time measurers in a dynamic RA system sample the dynamic program state of the host to form evidence used to establish trust by a remote system (the appraiser). However, existing run-time measurers are tightly integrated with specific software. Such measurers need to be generated anew for each piece of software, a manual process that is both challenging and tedious. In this paper we present a novel approach to decouple application-specific measurement policies from the measurers tasked with performing the actual run-time measurement. We describe MSRR (MeaSeReR), a novel general-purpose measurement framework that is agnostic of the target application. We show how measurement policies written per application can use MSRR, eliminating much of the time and effort spent on reproducing core measurement functionality. We describe MSRR's robust querying language, which allows the appraiser to accurately specify what, when, and how to measure. We evaluate MSRR's overhead and demonstrate its functionality.
Shi, T., Shi, W., Wang, C., Wang, Z..  2018.  Compressed Sensing based Intrusion Detection System for Hybrid Wireless Mesh Networks. 2018 International Conference on Computing, Networking and Communications (ICNC). :11–15.
As wireless mesh networks (WMNs) develop rapidly, security issues become increasingly important. An Intrusion Detection System (IDS) is one of the crucial ways to detect attacks. However, IDSs in wireless networks, including WMNs, bring high detection overhead, which degrades network performance. In this paper, we apply compressed sensing (CS) theory to intrusion detection and propose a CS-based IDS for hybrid WMNs. Since CS can reconstruct a sparse signal from compressive samples, we process the detected data to construct sparse original signals. Through a reconstruction algorithm, the compressively sampled data can be reconstructed and used for detecting intrusions, which reduces the detection overhead. We also propose the Active State Metric (ASM) as an attack metric for recognizing attacks; it measures PHY-layer activity and the energy consumption of each node. Intensive simulations show that under 50% attack density, our proposed IDS can ensure a 95% detection rate while reducing detection overhead by about 40% on average.
Versluis, L., Neacsu, M., Iosup, A..  2018.  A Trace-Based Performance Study of Autoscaling Workloads of Workflows in Datacenters. 2018 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). :223–232.
To improve customer experience, datacenter operators offer support for simplifying application and resource management. For example, running workloads of workflows on behalf of customers is desirable, but requires increasingly more sophisticated autoscaling policies, that is, policies that dynamically provision resources for the customer. Although selecting and tuning autoscaling policies is a challenging task for datacenter operators, so far relatively few studies investigate the performance of autoscaling for workloads of workflows. Complementing previous knowledge, in this work we propose the first comprehensive performance study in the field. Using trace-based simulation, we compare state-of-the-art autoscaling policies across multiple application domains, workload arrival patterns (e.g., burstiness), and system utilization levels. We further investigate the interplay between autoscaling and regular allocation policies, and the complexity cost of autoscaling. Our quantitative study focuses not only on traditional performance metrics and on state-of-the-art elasticity metrics, but also on time- and memory-related autoscaling-complexity metrics. Our main results give strong and quantitative evidence about previously unreported operational behavior, for example, that autoscaling policies perform differently across application domains and allocation and provisioning policies should be co-designed.
Pulparambil, S., Baghdadi, Y., Al-Hamdani, A., Al-Badawi, M..  2018.  Service Design Metrics to Predict IT-Based Drivers of Service Oriented Architecture Adoption. 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–7.
The key factors for deploying successful services are centered on the service design practices adopted by an enterprise. Design-level information should be validated, and measures are required to quantify structural attributes. Metrics at this stage support early discovery of design flaws and help designers predict the capabilities of service oriented architecture (SOA) adoption. In this work, we take a deeper look at how to forecast two key SOA capabilities, infrastructure efficiency and service reuse, from service designs modeled in the SOA modeling language. The proposed approach defines metrics based on the structural and domain-level similarity of service operations. The proposed metrics are analytically validated with respect to software engineering metric properties. Moreover, a tool has been developed to automate the proposed approach, and the results indicate that the metrics predict the SOA capabilities at the service design stage. This work can be further extended to predict business-oriented capabilities of SOA adoption, such as flexibility and agility.
Cebe, M., Akkaya, K..  2017.  Efficient Management of Certificate Revocation Lists in Smart Grid Advanced Metering Infrastructure. 2017 IEEE 14th International Conference on Mobile Ad Hoc and Sensor Systems (MASS). :313–317.
Advanced Metering Infrastructure (AMI) forms a communication network for the collection of power data from smart meters in the Smart Grid. As communication within an AMI needs to be secure, key management becomes an issue due to overhead and limited resources. While using public keys eliminates some of the overhead of key management, there are still challenges regarding the certificates that store and certify the public keys. In particular, distribution and storage of the certificate revocation list (CRL) is a major challenge due to the cost of distribution and storage in AMI networks, which typically consist of wireless multi-hop networks. Motivated by the need to keep CRL distribution and storage cost-effective and scalable, in this paper we present a distributed CRL management model utilizing the idea of distributed hash trees (DHTs) from peer-to-peer (P2P) networks. The basic idea is to share the burden of CRL storage among all the smart meters by exploiting their meshing capability with each other. Thus, using DHTs not only reduces the space requirements for CRLs but also makes CRL updates more convenient. We implemented this structure in ns-3 using the IEEE 802.11s mesh standard as a model for AMI and demonstrated its superior performance with respect to traditional methods of CRL management through extensive simulations.
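The sharing idea, in which each meter stores the CRL shard whose hash range it owns, can be sketched with consistent hashing; the meter names and the certificate serial below are hypothetical, and the paper's actual DHT structure is not reproduced:

```python
import hashlib

def responsible_meter(serial, meters):
    """Map a revoked-certificate serial to the meter whose position on
    the hash ring follows the serial's hash (consistent-hashing sketch)."""
    def h(s):
        return int(hashlib.sha256(s.encode()).hexdigest(), 16)
    key = h(serial)
    ring = sorted(meters, key=h)
    for m in ring:
        if h(m) >= key:
            return m
    return ring[0]  # wrap around the ring

meters = [f"meter-{i}" for i in range(16)]
owner = responsible_meter("serial-0xDEADBEEF", meters)
```

Every node computes the same owner for a given serial, so a lookup needs no central directory, and adding or removing a meter only remaps the shards adjacent to it on the ring.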
Fayyad, S., Noll, J..  2017.  A Framework for Measurability of Security. 2017 8th International Conference on Information and Communication Systems (ICICS). :302–309.

An effective security level for an Embedded System (ES) helps ensure reliable and stable operation of the system. In order to identify whether the current security level of a given ES is effective, we need a proactive evaluation of this security level. Evaluating the security level of an ES is not a straightforward process; factors like the heterogeneity among the components of the ES complicate it. One productive approach that has overcome the complexity of evaluating Security, Privacy and Dependability (SPD) is Multi Metrics (MM). Like most SPD evaluation approaches, the MM approach relies on expert knowledge for the basic evaluation. Regardless of its advantages, expert evaluation has some drawbacks, which foster the need for a less expert-dependent evaluation. In this paper, we propose a framework for the measurability of security as a part of security, privacy and dependability evaluation. The security evaluation is based on the MM approach, an effective approach for such evaluations; thus, we call it the MM framework. The evaluation investigated within the MM framework is also based on systematic storing and retrieving of expert knowledge. Using the MM framework, the administrator of an ES can evaluate and enhance the security level of their system without being an expert in security.

Turnley, J., Wachtel, A., Muñoz-Ramos, K., Hoffman, M., Gauthier, J., Speed, A., Kittinger, R..  2017.  Modeling human-technology interaction as a sociotechnical system of systems. 2017 12th System of Systems Engineering Conference (SoSE). :1–6.
As system of systems (SoS) models become increasingly complex and interconnected, a new approach is needed to capture the effects of humans within the SoS. Many real-life events have shown the detrimental outcomes of failing to account for humans in the loop. This research introduces a novel, cross-disciplinary methodology for modeling humans interacting with technologies to perform tasks within an SoS, specifically within a layered physical security system use case. Metrics and formulations developed for this new way of looking at SoS, termed sociotechnical SoS, allow for the quantification of the interplay of effectiveness and efficiency seen in detection theory, to measure the ability of a physical security system to detect and respond to threats. This methodology has been applied to a notional representation of a small military Forward Operating Base (FOB) as a proof of concept.
Zhang, H., Lou, F., Fu, Y., Tian, Z..  2017.  A Conditional Probability Computation Method for Vulnerability Exploitation Based on CVSS. 2017 IEEE Second International Conference on Data Science in Cyberspace (DSC). :238–241.
Computing the probability of vulnerability exploitation in Bayesian attack graphs (BAGs) is a key process in network security assessment. The conditional probability of vulnerability exploitation can be obtained from the exploitability subscore of NIST's Common Vulnerability Scoring System (CVSS). However, the method that N. Poolsappasit et al. proposed for computing this conditional probability applies only to CVSS metric version 2.0 and cannot be used with the other two versions. In this paper, we present two methods for computing the conditional probability based on CVSS's other two metric versions, 1.0 and 3.0, respectively. Together with the method of N. Poolsappasit et al., these methods complete the CVSS-based computation of the conditional probability of vulnerability exploitation.
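For the v2.0 case the abstract references, the CVSS v2 exploitability subscore is E = 20 × AV × AC × Au (capped at 10), and one common mapping, in the style attributed to Poolsappasit et al., normalizes it to [0, 1] to serve as the conditional probability in the BAG. A sketch using official CVSS v2 metric values; the normalization step is my assumption about the mapping, not a quotation of the paper's formula:

```python
def cvss2_conditional_probability(av, ac, au):
    """CVSS v2 exploitability subscore E = 20 * AV * AC * Au (max 10),
    normalized to [0, 1] for use as a BAG conditional probability."""
    exploitability = 20.0 * av * ac * au
    return min(exploitability, 10.0) / 10.0

# Official CVSS v2 values: AV Network = 1.0, AC Low = 0.71,
# Au None = 0.704 -- a remotely exploitable, low-complexity flaw.
p = cvss2_conditional_probability(1.0, 0.71, 0.704)
```

The paper's contribution is constructing analogous mappings for the v1.0 and v3.0 metric vectors, whose exploitability formulas use different factors and scaling constants.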
Teusner, R., Matthies, C., Giese, P..  2017.  Should I Bug You? Identifying Domain Experts in Software Projects Using Code Complexity Metrics. 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS). :418–425.
In any sufficiently complex software system there are experts who have a deeper understanding of parts of the system than others. However, it is not always clear who these experts are and which particular parts of the system they can help with. We propose a framework to elicit the expertise of developers and recommend experts by analyzing complexity measures over time. Furthermore, teams can detect those parts of the software for which currently no, or only a few, experts exist and take preventive action to keep collective code knowledge and ownership high. We employed the developed approach at a medium-sized company. The results were evaluated with a survey comparing the perceived and the computed expertise of developers. We show that aggregated code metrics can be used to identify experts for different software components. The identified experts were rated as acceptable candidates by developers in over 90% of all cases.
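One simple proxy for this kind of metrics-over-time expertise elicitation is to accumulate, per author and file, the complexity each commit introduced or removed. A hypothetical sketch; the commit log, file names, and the use of cyclomatic-complexity deltas as the signal are illustrative assumptions, not the paper's exact formulation:

```python
from collections import defaultdict

def expertise_scores(commits):
    """Accumulate the absolute code-complexity change each author made
    to each file; larger accumulated contribution is treated as a proxy
    for deeper familiarity with that component."""
    scores = defaultdict(float)
    for author, path, complexity_delta in commits:
        scores[(author, path)] += abs(complexity_delta)
    return scores

# Hypothetical commit log: (author, file, cyclomatic-complexity delta).
log = [("alice", "core/parser.py", 12), ("bob", "core/parser.py", 3),
       ("alice", "ui/view.py", -2), ("bob", "ui/view.py", 9)]
s = expertise_scores(log)
expert_for_parser = max(("alice", "bob"),
                        key=lambda a: s[(a, "core/parser.py")])
```

Components where no author's score clears a threshold would flag the low-ownership hot spots the abstract says teams should act on preventively.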
Hossain, M. A., Merrill, H. M., Bodson, M..  2017.  Evaluation of metrics of susceptibility to cascading blackouts. 2017 IEEE Power and Energy Conference at Illinois (PECI). :1–5.
In this paper, we evaluate the usefulness of metrics that assess susceptibility to cascading blackouts. The metrics are computed from a matrix of Line Outage Distribution Factors (LODF, or DFAX matrix). The metrics are compared for several base cases with different load levels of the Western Interconnection (WI). A case corresponding to the September 8, 2011 pre-blackout state is used to compute these metrics and relate them to the origin of the cascading blackout. The correlation between the proposed metrics is determined to check for redundancy. The analysis is also used to find vulnerable and critical hot spots in the power system.
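Entry (i, j) of an LODF matrix gives the fractional flow change on line i when line j trips, so a simple susceptibility indicator is, per outaged line, the largest absolute redistribution onto any remaining line. A sketch with a hypothetical 3-line matrix; this is one illustrative indicator, not the paper's specific metrics:

```python
def max_redistribution(lodf):
    """For each single-line outage (column j), return the largest
    absolute LODF entry over the remaining lines: how strongly the
    worst-affected line picks up the outaged line's flow."""
    n = len(lodf)
    return [max(abs(lodf[i][j]) for i in range(n) if i != j)
            for j in range(n)]

# Hypothetical 3-line LODF matrix: row i gives the flow change on
# line i per unit pre-outage flow on outaged line j (diagonal -1 by
# the convention that the outaged line loses its own flow).
lodf = [[-1.0, 0.4, 0.2],
        [0.5, -1.0, 0.7],
        [0.1, 0.6, -1.0]]
hotspots = max_redistribution(lodf)
```

Lines whose outage produces a large value are candidate hot spots: their lost flow concentrates on a single neighbor, the kind of overload chain that seeds a cascade.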
Takbiri, N., Houmansadr, A., Goeckel, D. L., Pishro-Nik, H..  2017.  Limits of location privacy under anonymization and obfuscation. 2017 IEEE International Symposium on Information Theory (ISIT). :764–768.

The prevalence of mobile devices and location-based services (LBS) has generated great concern regarding LBS users' privacy, which can be compromised by statistical analysis of their movement patterns. A number of algorithms have been proposed to protect the privacy of users in such systems, but the fundamental underpinnings of such protection remain unexplored. Recently, the concept of perfect location privacy was introduced and its achievability was studied for anonymization-based LBS systems, where user identifiers are permuted at regular intervals to prevent identification based on statistical analysis of long time sequences. In this paper, we significantly extend that investigation by incorporating the other major tool commonly employed to obtain location privacy: obfuscation, where user locations are purposely obscured to protect privacy. Since anonymization and obfuscation reduce user utility in LBS systems, we investigate how location privacy varies with the degree to which each of these two methods is employed. We provide: (1) achievability results for the case where the location of each user is governed by an i.i.d. process; (2) converse results for the i.i.d. case as well as the more general Markov Chain model. We show that, as the number of users in the network grows, the obfuscation-anonymization plane can be divided into two regions: in the first region, all users have perfect location privacy; and, in the second region, no user has location privacy.
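The two mechanisms the paper combines can be sketched concretely: anonymization permutes user identifiers, and obfuscation perturbs each reported location. A minimal illustration; the uniform-noise obfuscation and the trace data are assumptions for demonstration, not the paper's model (which works with i.i.d. and Markov location processes):

```python
import random

def anonymize_and_obfuscate(traces, noise_level, rng):
    """Apply both privacy mechanisms to a dict {user: [(x, y), ...]}:
    permute user identifiers (anonymization) and perturb each reported
    coordinate with bounded uniform noise (one simple obfuscation)."""
    users = list(traces)
    pseudonyms = users[:]
    rng.shuffle(pseudonyms)  # random relabeling of identifiers
    out = {}
    for user, alias in zip(users, pseudonyms):
        out[alias] = [(x + rng.uniform(-noise_level, noise_level),
                       y + rng.uniform(-noise_level, noise_level))
                      for x, y in traces[user]]
    return out

rng = random.Random(0)
traces = {"u1": [(0.0, 0.0), (1.0, 1.0)], "u2": [(5.0, 5.0)]}
released = anonymize_and_obfuscate(traces, noise_level=0.5, rng=rng)
```

The paper's question is, in effect, how large the permutation interval and the noise level must be, as the number of users grows, before statistical matching of released traces back to users becomes impossible.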