Biblio

Filters: Keyword is security metrics
2019-07-01
Arabsorkhi, A., Ghaffari, F..  2018.  Security Metrics: Principles and Security Assessment Methods. 2018 9th International Symposium on Telecommunications (IST). :305–310.

Nowadays, Information Technology is an important part of human life as well as of organizations. Organizations face IT-related problems and, to solve them, have to improve their security functions. There is therefore a need for security assessments within organizations to ensure adequate security conditions. Security standards and general metrics can be useful for measuring the safety of an organization; however, metrics that apply to businesses in general cannot be effective in every particular situation. It is therefore important to select metric standards suited to each business in order to improve both cost and organizational security. Selecting suitable security metrics depends on having an efficient way to identify them. Given the numerous complexities of these metrics and the extent to which they are defined, this paper, based on a comparative study and the benchmarking method, proposes a taxonomy of security metrics intended to help a business choose metrics tailored to its needs and conditions.

2019-05-01
Chen, Huashan, Cho, Jin-Hee, Xu, Shouhuai.  2018.  Quantifying the Security Effectiveness of Firewalls and DMZs. Proceedings of the 5th Annual Symposium and Bootcamp on Hot Topics in the Science of Security. :9:1–9:11.

Firewalls and Demilitarized Zones (DMZs) are two mechanisms that have been widely employed to secure enterprise networks. Despite this, their security effectiveness has not been systematically quantified. In this paper, we make a first step towards filling this void by presenting a representational framework for investigating their security effectiveness in protecting enterprise networks. Through simulation experiments, we draw useful insights into the security effectiveness of firewalls and DMZs. To the best of our knowledge, these insights were not reported in the literature until now.

2019-03-28
Llopis, S., Hingant, J., Pérez, I., Esteve, M., Carvajal, F., Mees, W., Debatty, T..  2018.  A Comparative Analysis of Visualisation Techniques to Achieve Cyber Situational Awareness in the Military. 2018 International Conference on Military Communications and Information Systems (ICMCIS). :1-7.
Starting from a common fictional scenario, simulated data sources and a set of measurements feed two different visualization techniques with the aim of making a comparative analysis. Both visualization techniques described in this paper use the operational picture concept, deemed the most appropriate tool for military commanders and their staff to achieve cyber situational awareness and to understand the cyber defence implications in operations. The Cyber Common Operational Picture (CyCOP) is a tool developed by Universitat Politècnica de València in collaboration with the Spanish Ministry of Defence whose objective is to generate Cyber Hybrid Situational Awareness (CyHSA). The Royal Military Academy in Belgium developed a 3D Operational Picture able to display mission-critical elements intuitively using a priori defined domain knowledge. The comparative analysis will assist researchers in advancing solutions and implementation aspects.
2019-03-22
Alavizadeh, H., Jang-Jaccard, J., Kim, D. S..  2018.  Evaluation for Combination of Shuffle and Diversity on Moving Target Defense Strategy for Cloud Computing. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :573-578.

Moving Target Defence (MTD) has recently been proposed as an emerging proactive approach that provides asynchronous defensive strategies. Unlike traditional security solutions that focus on removing vulnerabilities, MTD makes a system dynamic and unpredictable by continuously changing the attack surface to confuse attackers. MTD can be utilized in cloud computing to address the cloud's security-related problems. Much of the literature proposes MTD methods in various contexts, but approaches to evaluate the effectiveness of the proposed methods are still lacking. In this paper, we propose a combination of the Shuffle and Diversity MTD techniques and investigate the effects of deploying them from two perspectives, corresponding to two groups of security metrics: (i) system risk, which reflects the cloud provider's perspective, and (ii) attack cost and return on attack, which reflect the attacker's point of view. Moreover, we utilize a scalable Graphical Security Model (GSM) to manage the complexity of the security analysis. Finally, we show that combining MTD techniques can improve both groups of security metrics, whereas the individual techniques cannot.
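
For intuition on the two metric groups mentioned above, the sketch below computes a simplified system risk and return on attack over a set of attack paths. The per-path success probabilities, asset value, and attack costs are hypothetical placeholders, not the formulations used in the paper.

    # Illustrative sketch (not the paper's exact formulation): given attack paths
    # through a cloud deployment, compute a simple system risk and return on attack.
    # Path success probabilities, asset value, and attack costs are assumed toy inputs.
    paths = [
        {"p_success": 0.30, "asset_value": 100.0, "attack_cost": 20.0},
        {"p_success": 0.12, "asset_value": 100.0, "attack_cost": 35.0},
    ]

    # System risk (provider's view): expected loss along the most damaging path.
    system_risk = max(p["p_success"] * p["asset_value"] for p in paths)

    # Return on attack (attacker's view): expected gain per unit of attack cost.
    return_on_attack = max(p["p_success"] * p["asset_value"] / p["attack_cost"] for p in paths)

    print(system_risk, return_on_attack)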

2019-02-08
Enoch, Simon Yusuf, Hong, Jin B., Ge, Mengmeng, Alzaid, Hani, Kim, Dong Seong.  2018.  Automated Security Investment Analysis of Dynamic Networks. Proceedings of the Australasian Computer Science Week Multiconference. :6:1-6:10.
It is important to assess the cost benefits of IT security investments. Typically, this is done through a manual risk assessment process. In this paper, we propose an approach to automate this using graphical security models (GSMs). GSMs have been used to assess the security of networked systems using various security metrics. Most existing GSMs assume that networks are static; however, modern networks (e.g., Cloud and Software-Defined Networking) are dynamic and subject to change. Thus, it is important to develop an approach that takes into account the dynamic aspects of networks. To this end, we automate the security investment analysis of dynamic networks using a GSM named the Temporal-Hierarchical Attack Representation Model (T-HARM) in order to automatically evaluate security investments and their effectiveness over a given period of time. We demonstrate our approach via simulations.
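
As background for the cost-benefit assessment described here, one widely used textbook formulation of the return on a security investment (ROSI) is given below in terms of annualized loss expectancy (ALE); this is a standard formulation and not necessarily the metric computed by T-HARM.

    \mathrm{ROSI} = \frac{(\mathrm{ALE}_{\mathrm{before}} - \mathrm{ALE}_{\mathrm{after}}) - C_{\mathrm{investment}}}{C_{\mathrm{investment}}}

where ALE_before and ALE_after denote the annualized loss expectancy without and with the security control, and C_investment is the cost of the control.
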
Mukherjee, Preetam, Mazumdar, Chandan.  2018.  Attack Difficulty Metric for Assessment of Network Security. Proceedings of the 13th International Conference on Availability, Reliability and Security. :44:1-44:10.
In recent times, organizational networks have become targets of sophisticated multi-hop attacks. The attack graph has been proposed as a useful modeling tool for complex attack scenarios, combining multiple vulnerabilities into causal chains. Analysis of attack scenarios enables security administrators to calculate quantitative security measurements, which justify security investments in the organization. Different attack-graph-based security metrics have been introduced for evaluating comparable security measurements. Studies show that the difficulty of exploiting the same vulnerability changes with its position in the causal chains of the attack graph. In this paper, a new attack-graph-based security metric, namely Attack Difficulty, is proposed to include this position factor. Security metrics are classified into two major categories, viz. counting metrics and difficulty-based metrics, and the proposed Attack Difficulty metric employs both categories as the basis for its measurement. Case studies are presented to demonstrate the applicability of the proposed metric, and a comparison of this new metric with other attack-graph-based security metrics is included to validate its acceptance in real-life situations.
2018-11-19
Culler, M., Davis, K..  2018.  Toward a Sensor Trustworthiness Measure for Grid-Connected IoT-Enabled Smart Cities. 2018 IEEE Green Technologies Conference (GreenTech). :168–171.

Traditional security measures for large-scale critical infrastructure systems have focused on keeping adversaries out of the system. As the Internet of Things (IoT) extends into millions of homes, with tens or hundreds of devices each, the threat landscape is complicated. IoT devices have unknown access capabilities with unknown reach into other systems. This paper presents ongoing work on how techniques in sensor verification and cyber-physical modeling and analysis on bulk power systems can be applied to identify malevolent IoT devices and secure smart and connected communities against the most impactful threats.

2018-09-05
Pasareanu, C..  2017.  Symbolic execution and probabilistic reasoning. 2017 32nd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS). :1–1.
Summary form only given. Symbolic execution is a systematic program analysis technique which explores multiple program behaviors all at once by collecting and solving symbolic path conditions over program paths. The technique has been recently extended with probabilistic reasoning. This approach computes the conditions to reach target program events of interest and uses model counting to quantify the fraction of the input domain satisfying these conditions thus computing the probability of event occurrence. This probabilistic information can be used for example to compute the reliability of an aircraft controller under different wind conditions (modeled probabilistically) or to quantify the leakage of sensitive data in a software system, using information theory metrics such as Shannon entropy. In this talk we review recent advances in symbolic execution and probabilistic reasoning and we discuss how they can be used to ensure the safety and security of software systems.
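
As a toy illustration of the model-counting idea described in the abstract, the sketch below computes the probability of reaching a target event as the fraction of a finite input domain that satisfies a path condition; the condition and the input domain are made up for illustration.

    # Toy illustration of probabilistic symbolic execution via model counting:
    # the probability of reaching a target event is the fraction of the input
    # domain satisfying the collected path condition. The condition below
    # (x + y > 12 with x, y uniform over 0..9) is a hypothetical example.
    domain = [(x, y) for x in range(10) for y in range(10)]

    def path_condition(x, y):
        return x + y > 12  # path condition collected along the path to the event

    satisfying = sum(1 for x, y in domain if path_condition(x, y))
    probability = satisfying / len(domain)  # fraction of inputs reaching the event
    print(probability)
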
Turnley, J., Wachtel, A., Muñoz-Ramos, K., Hoffman, M., Gauthier, J., Speed, A., Kittinger, R..  2017.  Modeling human-technology interaction as a sociotechnical system of systems. 2017 12th System of Systems Engineering Conference (SoSE). :1–6.
As system of systems (SoS) models become increasingly complex and interconnected a new approach is needed to capture the effects of humans within the SoS. Many real-life events have shown the detrimental outcomes of failing to account for humans in the loop. This research introduces a novel and cross-disciplinary methodology for modeling humans interacting with technologies to perform tasks within an SoS specifically within a layered physical security system use case. Metrics and formulations developed for this new way of looking at SoS termed sociotechnical SoS allow for the quantification of the interplay of effectiveness and efficiency seen in detection theory to measure the ability of a physical security system to detect and respond to threats. This methodology has been applied to a notional representation of a small military Forward Operating Base (FOB) as a proof-of-concept.
Zhang, H., Lou, F., Fu, Y., Tian, Z..  2017.  A Conditional Probability Computation Method for Vulnerability Exploitation Based on CVSS. 2017 IEEE Second International Conference on Data Science in Cyberspace (DSC). :238–241.
Computing the probability of vulnerability exploitation in Bayesian attack graphs (BAGs) is a key process in network security assessment. The conditional probability of vulnerability exploitation can be obtained from the exploitability score of NIST's Common Vulnerability Scoring System (CVSS). However, the method that N. Poolsappasit et al. proposed for computing this conditional probability applies only to CVSS metric version 2.0 and cannot be used with the other two versions. In this paper, we present two methods for computing the conditional probability based on CVSS's other two metric versions, version 1.0 and version 3.0, respectively. Combined with the method of N. Poolsappasit et al., this makes the CVSS-based computation of the conditional probability of vulnerability exploitation complete.
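
For reference, a minimal sketch of the CVSS v2.0-based mapping commonly attributed to N. Poolsappasit et al., in which the exploitation probability is taken as the v2.0 exploitability sub-score divided by 10. The sub-metric constants are the standard CVSS v2.0 values; the function name and capping at 1.0 are illustrative choices.

    # Sketch of the CVSS v2.0-based mapping: Exploitability = 20 * AV * AC * AU,
    # and the probability of exploitation is taken as Exploitability / 10.
    # Sub-metric values below are the standard CVSS v2.0 constants.
    ACCESS_VECTOR = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
    ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
    AUTHENTICATION = {"multiple": 0.45, "single": 0.56, "none": 0.704}

    def exploitation_probability(av, ac, au):
        exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
        return min(exploitability / 10.0, 1.0)

    # Example: a remotely exploitable, low-complexity vulnerability, no authentication.
    print(exploitation_probability("network", "low", "none"))
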
Teusner, R., Matthies, C., Giese, P..  2017.  Should I Bug You? Identifying Domain Experts in Software Projects Using Code Complexity Metrics. 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS). :418–425.
In any sufficiently complex software system there are experts, having a deeper understanding of parts of the system than others. However, it is not always clear who these experts are and which particular parts of the system they can provide help with. We propose a framework to elicit the expertise of developers and recommend experts by analyzing complexity measures over time. Furthermore, teams can detect those parts of the software for which currently no, or only a few, experts exist and take preventive actions to keep the collective code knowledge and ownership high. We employed the developed approach at a medium-sized company. The results were evaluated with a survey, comparing the perceived and the computed expertise of developers. We show that aggregated code metrics can be used to identify experts for different software components. The identified experts were rated as acceptable candidates by developers in over 90% of all cases.
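
A rough sketch of the kind of aggregation described, assuming a hypothetical mined commit history: complexity added per developer per component is accumulated, and the highest-scoring developers are recommended as experts. The data model and weighting are assumptions, not the paper's exact framework.

    # Illustrative sketch (assumed data model, not the paper's exact framework):
    # accumulate complexity added per developer per component over commit history,
    # then recommend the highest-scoring developers as likely experts.
    from collections import defaultdict

    # Each record: (author, component, complexity_delta) -- hypothetical mining output.
    commits = [
        ("alice", "billing", 12.0),
        ("bob", "billing", 3.5),
        ("alice", "auth", 1.0),
        ("bob", "auth", 9.0),
    ]

    expertise = defaultdict(float)
    for author, component, delta in commits:
        expertise[(component, author)] += max(delta, 0.0)  # count only added complexity

    def experts_for(component, top_n=1):
        scores = [(a, s) for (c, a), s in expertise.items() if c == component]
        return sorted(scores, key=lambda kv: kv[1], reverse=True)[:top_n]

    print(experts_for("billing"))
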
Wang, J., Shi, D., Li, Y., Chen, J., Duan, X..  2017.  Realistic measurement protection schemes against false data injection attacks on state estimators. 2017 IEEE Power Energy Society General Meeting. :1–5.
False data injection attacks (FDIA) on state estimators are an imminent cyber-physical security issue. Fortunately, it has been proved that if a set of measurements is strategically selected and protected, no FDIA will remain undetectable. In this paper, the metric Return on Investment (ROI) is introduced to evaluate the overall returns of alternative measurement protection schemes (MPS). By setting the maximum total ROI as the optimization objective, the previously ignored cost-benefit issue is taken into account to derive a realistic MPS for power utilities. The optimization problem is transformed into the Steiner tree problem in graph theory, where a tree-pruning-based algorithm is used to reduce the computational complexity and find a quasi-optimal solution with acceptable approximations. The correctness and efficiency of the algorithm are verified by case studies.
Hossain, M. A., Merrill, H. M., Bodson, M..  2017.  Evaluation of metrics of susceptibility to cascading blackouts. 2017 IEEE Power and Energy Conference at Illinois (PECI). :1–5.
In this paper, we evaluate the usefulness of metrics that assess susceptibility to cascading blackouts. The metrics are computed using a matrix of Line Outage Distribution Factors (LODF, or DFAX matrix). The metrics are compared for several base cases with different load levels of the Western Interconnection (WI). A case corresponding to the September 8, 2011 pre-blackout state is used to compute these metrics and relate them to the origin of the cascading blackout. The correlation between the proposed metrics is determined to check redundancy. The analysis is also used to find vulnerable and critical hot spots in the power system.
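
For readers unfamiliar with the DFAX matrix mentioned above, the line outage distribution factor has a standard DC-approximation form (the paper's susceptibility metrics built on top of it are not reproduced here):

    \mathrm{LODF}_{\ell,k} = \frac{\mathrm{PTDF}_{\ell,k}}{1 - \mathrm{PTDF}_{k,k}}, \qquad \Delta P_{\ell} = \mathrm{LODF}_{\ell,k}\, P_{k}^{0}

where PTDF_{l,k} is the power transfer distribution factor of monitored line l for a transfer between the terminal buses of outaged line k, and P_k^0 is the pre-outage flow on line k.
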
Doynikova, E., Kotenko, I..  2017.  Enhancement of probabilistic attack graphs for accurate cyber security monitoring. 2017 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computed, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). :1–6.
Timely and adequate response to computer security incidents depends on accurate monitoring of the security situation. The paper investigates the task of refining attack models in the form of attack graphs. It considers some challenges of attack graph generation and possible solutions, including: inaccuracies in specifying the pre- and postconditions of attack actions, processing of cycles in graphs to apply Bayesian methods for attack graph analysis, mapping of incidents onto attack graph nodes, and automatic countermeasure selection for the nodes at risk. The software prototype that implements the suggested solutions is briefly specified. The influence of the modifications on security monitoring is shown in a case study, and the results of experiments are described.
2018-04-30
Korczynski, M., Tajalizadehkhoob, S., Noroozian, A., Wullink, M., Hesselman, C., van Eeten, M..  2017.  Reputation Metrics Design to Improve Intermediary Incentives for Security of TLDs. 2017 IEEE European Symposium on Security and Privacy (EuroS P). :579–594.

Over the years cybercriminals have misused the Domain Name System (DNS) - a critical component of the Internet - to gain profit. Despite this persisting trend, little empirical information about the security of Top-Level Domains (TLDs) and of the overall 'health' of the DNS ecosystem exists. In this paper, we present security metrics for this ecosystem and measure the operational values of such metrics using three representative phishing and malware datasets. We benchmark entire TLDs against the rest of the market. We explicitly distinguish these metrics from the idea of measuring security performance, because the measured values are driven by multiple factors, not just by the performance of the particular market player. We consider two types of security metrics: occurrence of abuse and persistence of abuse. In conjunction, they provide a good understanding of the overall health of a TLD. We demonstrate that attackers abuse a variety of free services with good reputation, affecting not only the reputation of those services, but of entire TLDs. We find that, when normalized by size, old TLDs like .com host more bad content than new generic TLDs. We propose a statistical regression model to analyze how the different properties of TLD intermediaries relate to abuse counts. We find that next to TLD size, abuse is positively associated with domain pricing (i.e. registries who provide free domain registrations witness more abuse). Last but not least, we observe a negative relation between the DNSSEC deployment rate and the count of phishing domains.
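
A minimal sketch of an occurrence-of-abuse style metric normalized by TLD size, in the spirit of the metrics described; the field names, the per-10,000-domains normalization, and all numbers are toy values for illustration only.

    # Minimal sketch of a size-normalized abuse-occurrence metric per TLD.
    # Field names, the per-10,000-domains normalization, and the counts below
    # are toy values for illustration only.
    tlds = {
        "example-old-tld": {"registered_domains": 145_000_000, "abused_domains": 52_000},
        "example-new-tld": {"registered_domains": 2_300_000,   "abused_domains": 4_100},
    }

    def abuse_rate_per_10k(stats):
        return 10_000 * stats["abused_domains"] / stats["registered_domains"]

    for tld, stats in tlds.items():
        print(tld, round(abuse_rate_per_10k(stats), 2))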

Cowart, R., Coe, D., Kulick, J., Milenković, A..  2017.  An Implementation and Experimental Evaluation of Hardware Accelerated Ciphers in All-Programmable SoCs. Proceedings of the SouthEast Conference. :34–41.

The protection of confidential information has become very important with the increase of data sharing and storage on public domains. Data confidentiality is accomplished through the use of ciphers that encrypt and decrypt the data to impede unauthorized access. Emerging heterogeneous platforms provide an ideal environment to use hardware acceleration to improve application performance. In this paper, we explore the performance benefits of an AES hardware accelerator versus the software implementation for multiple cipher modes on the Zynq 7000 All-Programmable System-on-a-Chip (SoC). The accelerator is implemented on the FPGA fabric of the SoC and utilizes DMA for interfacing to the CPU. File encryption and decryption of varying file sizes are used as the workload, with execution time and throughput as the metrics for comparing the performance of the hardware and software implementations. The performance evaluations show that the accelerated AES operations achieve a speedup of 7 times relative to its software implementation and throughput upwards of 350 MB/s for the counter cipher mode, and modest improvements for other cipher modes.
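
For context, a rough sketch of how a software-baseline throughput measurement of this kind can be set up, here using the Python cryptography library's AES-CTR mode on an in-memory buffer. The workload size and timing approach are illustrative, not the paper's benchmark harness (which runs on the Zynq SoC against the FPGA accelerator).

    # Rough sketch of a software AES-CTR throughput measurement (illustrative only).
    import os, time
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, nonce = os.urandom(32), os.urandom(16)
    data = os.urandom(64 * 1024 * 1024)  # 64 MiB workload (assumed size)

    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    start = time.perf_counter()
    ciphertext = encryptor.update(data) + encryptor.finalize()
    elapsed = time.perf_counter() - start

    print(f"throughput: {len(data) / elapsed / 1e6:.1f} MB/s")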

Li, Meng, Lai, Liangzhen, Chandra, Vikas, Pan, David Z..  2017.  Cross-Level Monte Carlo Framework for System Vulnerability Evaluation Against Fault Attack. Proceedings of the 54th Annual Design Automation Conference 2017. :17:1–17:6.

Fault attacks have become a serious threat to system security and need to be evaluated in the design stage. Existing methods usually ignore the intrinsic uncertainty in the attack process and suffer from low scalability. In this paper, we develop a general framework to evaluate system vulnerability against fault attack. A holistic model for fault injection is incorporated to capture the probabilistic nature of the attack process. Based on the probabilistic model, a security metric named the System Security Factor (SSF) is defined to measure system vulnerability. In the framework, a Monte Carlo method is leveraged to enable a feasible evaluation of the SSF for different systems, security policies, and attack techniques. We enhance the framework with a novel system pre-characterization procedure, on the basis of which an importance sampling strategy is proposed. Experimental results on a commercial processor demonstrate that, compared to random sampling, a 2500X speedup is achieved with the proposed sampling strategy. Meanwhile, 3% of registers are identified as contributing to more than 95% of the SSF. By hardening these registers, a 6.5X security improvement can be achieved with less than 2% area overhead.
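
A minimal Monte Carlo sketch of estimating a fault-attack success probability as a stand-in for an SSF-style metric; the single-bit-flip fault model, register counts, and success predicate are placeholders, and the paper's pre-characterization and importance-sampling refinements are not reproduced.

    # Minimal Monte Carlo sketch of estimating a fault-attack success probability.
    # The fault model and success predicate are placeholders, not the paper's model.
    import random

    N_REGISTERS = 1000
    CRITICAL = set(range(30))        # hypothetical security-critical registers

    def attack_succeeds():
        target = random.randrange(N_REGISTERS)   # fault lands on a random register
        return target in CRITICAL                # succeeds only if a critical register is hit

    trials = 100_000
    success_rate = sum(attack_succeeds() for _ in range(trials)) / trials
    print(success_rate)   # should be close to len(CRITICAL) / N_REGISTERS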

Farraj, Abdallah, Hammad, Eman, Kundur, Deepa.  2017.  Performance Metrics for Storage-Based Transient Stability Control. Proceedings of the 2Nd Workshop on Cyber-Physical Security and Resilience in Smart Grids. :9–14.

In this work we investigate existing and new metrics for evaluating transient stability of power systems to quantify the impact of distributed control schemes. Specifically, an energy storage system (ESS)-based control scheme that builds on feedback linearization theory is implemented in the power system to enhance its transient stability. We study the value of incorporating such ESS-based distributed control on specific transient stability metrics that include critical clearing time, critical control activation time, system stability time, rotor angle stability index, rotor speed stability index, rate of change of frequency, and control power. The stability metrics are evaluated using the IEEE 68-bus test power system. Numerical results demonstrate the value of the distributed control scheme in enhancing the transient stability metrics of power systems.
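
Of the metrics listed, the rate of change of frequency (ROCOF) has a particularly simple form; the sketch below computes it from sampled bus frequency, with hypothetical sample values and sampling interval.

    # Sketch: rate of change of frequency (ROCOF) from sampled bus frequency.
    # Sample values and sampling interval are hypothetical.
    freq_hz = [60.00, 59.98, 59.95, 59.91, 59.88]   # toy post-disturbance samples
    dt = 0.1                                        # sampling interval in seconds

    rocof = [(f2 - f1) / dt for f1, f2 in zip(freq_hz, freq_hz[1:])]
    worst_rocof = min(rocof)                        # steepest frequency decline (Hz/s)
    print(worst_rocof)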

Li, Huan, Guo, Chen, Wang, Donglin.  2017.  Hybrid Sorting Method for Successive Cancellation List Decoding of Polar Codes. Proceedings of the 2017 the 7th International Conference on Communication and Network Security. :23–26.

This paper proposes a hybrid metric sorting method (HMS) for successive cancellation list decoders for polar codes, in which metric sorting plays a critical role in the decoding process. We review the state-of-the-art metric sorting methods and combine their advantages to generate the proposed method. Due to the optimized architecture, the proposed HMS method effectively reduces the number of comparison stages with little increase in comparisons. Evaluation results show that about 25 percent of comparison stages can be removed by HMS compared with state-of-the-art methods. The proposed method thus enjoys a latency reduction for hardware implementation.
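
For context, the operation that metric sorting implements in successive cancellation list decoding is pruning the 2L candidate paths produced by a bit decision down to the L paths with the smallest path metrics; the sketch below shows that baseline selection step, not the proposed HMS architecture.

    # Reference sketch of the list-pruning step in successive cancellation list
    # (SCL) decoding: keep the L paths with the smallest path metrics out of the
    # 2L candidates produced by a bit decision. Candidate values are toy data.
    import heapq

    def prune_paths(candidates, list_size):
        # candidates: list of (path_metric, path_state) pairs, length 2 * list_size
        return heapq.nsmallest(list_size, candidates, key=lambda c: c[0])

    candidates = [(3.2, "p0"), (1.1, "p1"), (4.8, "p2"), (0.7, "p3")]
    print(prune_paths(candidates, list_size=2))   # keeps the two lowest-metric paths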

Mahdi, Fatna El, Habbani, Ahmed, Mouchfiq, Nada, Essaid, Bilal.  2017.  Study of Security in MANETs and Evaluation of Network Performance Using ETX Metric. Proceedings of the 2017 International Conference on Smart Digital Environment. :220–228.

Today, we witness the emergence of smart environments, where devices are able to connect independently without human intervention. Mobile ad hoc networks are an example of smart environments that are widely deployed in public spaces. They offer great services and features compared with wired systems. However, these networks are more sensitive to malicious attacks because of the lack of infrastructure and the self-organizing nature of devices. Thus, communication between nodes is much more exposed to various security risks than in other networks. In this paper, we present a synthetic study of the security concept for MANETs, and then introduce a contribution based on evaluating link quality, using the ETX metric, to enhance network availability.
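
For reference, the ETX (expected transmission count) metric mentioned above is conventionally defined from the forward and reverse delivery ratios of a link, with a route's ETX being the sum of its link ETX values:

    \mathrm{ETX} = \frac{1}{d_f \times d_r}

where d_f is the probability that a data packet is successfully delivered to the neighbour and d_r is the probability that the corresponding acknowledgement is successfully received.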

Halunen, Kimmo, Karinsalo, Anni.  2017.  Measuring the Value of Privacy and the Efficacy of PETs. Proceedings of the 11th European Conference on Software Architecture: Companion Proceedings. :132–135.

Privacy is a very active subject of research and also of debate in political circles. In order to make good decisions about privacy, we need measurement systems for privacy. Most traditional measures, such as k-anonymity, lack expressiveness in many cases. We present a privacy measuring framework, which can be used to measure the value of privacy to an individual and also to evaluate the efficacy of privacy enhancing technologies. Our method is centered on a subject, whose privacy can be measured through the amount and value of information learned about the subject by some observers. This gives rise to interesting probabilistic models for the value of privacy and measures for privacy enhancing technologies.
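
One simple way to quantify "information learned about the subject by some observers", in the spirit of the framework described but not its exact formulation, is the reduction in Shannon entropy between the observer's prior and posterior beliefs; the attribute and distributions below are toy values.

    # Illustrative sketch (not the paper's framework): quantify what an observer
    # learned about a subject's attribute as the reduction in Shannon entropy
    # between the observer's prior and posterior beliefs. Distributions are toy values.
    from math import log2

    def entropy(dist):
        return -sum(p * log2(p) for p in dist.values() if p > 0)

    prior = {"age_20s": 0.25, "age_30s": 0.25, "age_40s": 0.25, "age_50s": 0.25}
    posterior = {"age_20s": 0.05, "age_30s": 0.80, "age_40s": 0.10, "age_50s": 0.05}

    information_learned_bits = entropy(prior) - entropy(posterior)
    print(information_learned_bits)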

Eberz, Simon, Rasmussen, Kasper B., Lenders, Vincent, Martinovic, Ivan.  2017.  Evaluating Behavioral Biometrics for Continuous Authentication: Challenges and Metrics. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. :386–399.

In recent years, behavioral biometrics have become a popular approach to support continuous authentication systems. Most generally, a continuous authentication system can make two types of errors: false rejects and false accepts. Based on this, the most commonly reported metrics to evaluate systems are the False Reject Rate (FRR) and False Accept Rate (FAR). However, most papers only report the mean of these measures with little attention paid to their distribution. This is problematic as systematic errors allow attackers to perpetually escape detection while random errors are less severe. Using 16 biometric datasets we show that these systematic errors are very common in the wild. We show that some biometrics (such as eye movements) are particularly prone to systematic errors, while others (such as touchscreen inputs) show more even error distributions. Our results also show that the inclusion of some distinctive features lowers average error rates but significantly increases the prevalence of systematic errors. As such, blind optimization of the mean EER (through feature engineering or selection) can sometimes lead to lower security. Following this result we propose the Gini Coefficient (GC) as an additional metric to accurately capture different error distributions. We demonstrate the usefulness of this measure both to compare different systems and to guide researchers during feature selection. In addition to the selection of features and classifiers, some non-functional machine learning methodologies also affect error rates. The most notable examples of this are the selection of training data and the attacker model used to develop the negative class. 13 out of the 25 papers we analyzed either include imposter data in the negative class or randomly sample training data from the entire dataset, with a further 6 not giving any information on the methodology used. Using real-world data we show that both of these decisions lead to significant underestimation of error rates by 63% and 81%, respectively. This is an alarming result, as it suggests that researchers are either unaware of the magnitude of these effects or might even be purposefully attempting to over-optimize their EER without actually improving the system.
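
A sketch of a Gini coefficient computed over per-user error rates, the kind of evenness measure the authors propose: values near 0 indicate errors spread evenly across users, values near 1 indicate errors concentrated on a few users. The per-user false accept rates are toy values, and the standard discrete mean-difference form used here may differ from the authors' exact formulation.

    # Sketch: Gini coefficient over per-user error rates (standard discrete form).
    # Per-user false accept rates below are toy values.
    def gini(values):
        xs = sorted(values)
        n = len(xs)
        total = sum(xs)
        if total == 0:
            return 0.0
        cum = sum((i + 1) * x for i, x in enumerate(xs))
        return (2 * cum) / (n * total) - (n + 1) / n

    per_user_far = [0.01, 0.02, 0.02, 0.03, 0.40]   # one user absorbs most false accepts
    print(gini(per_user_far))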

2018-04-02
Doynikova, E., Kotenko, I..  2017.  CVSS-Based Probabilistic Risk Assessment for Cyber Situational Awareness and Countermeasure Selection. 2017 25th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP). :346–353.

The paper suggests several techniques for computer network risk assessment based on the Common Vulnerability Scoring System (CVSS) and attack modeling. The techniques use a set of integrated security metrics and consider input data from security information and event management (SIEM) systems. The risk assessment techniques differ according to the input data used, allowing risk assessments that take requirements on accuracy and efficiency into account. Input data includes network characteristics, attacks, attacker characteristics, security events, and countermeasures. The tool that implements these techniques is presented. Experiments demonstrate the operation of the techniques in different security situations.

Yousefi, M., Mtetwa, N., Zhang, Y., Tianfield, H..  2017.  A Novel Approach for Analysis of Attack Graph. 2017 IEEE International Conference on Intelligence and Security Informatics (ISI). :7–12.

The attack graph technique is a common tool for the evaluation of network security. However, attack graphs are generally too large and complex to be understood and interpreted by security administrators. This paper proposes an analysis framework for security attack graphs for a given IT infrastructure system. First, in order to facilitate the discovery of interconnectivities among vulnerabilities in a network, multi-host, multi-stage vulnerability analysis (MulVAL) is employed to generate an attack graph for a given network topology. Then a novel algorithm is applied to refine the attack graph and generate a simplified graph called a transition graph. Next, a Markov model is used to project the future security posture of the system. Finally, the framework is evaluated by applying it to a typical IT network scenario with specific services, network configurations, and vulnerabilities.
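
The Markov projection step described amounts to propagating the current security-state distribution through the transition matrix of the simplified transition graph; the three-state chain and its probabilities below are hypothetical.

    # Sketch of the Markov projection step: propagate the current security-state
    # distribution through the transition matrix of the (simplified) transition graph.
    # The 3-state chain and its probabilities are hypothetical.
    import numpy as np

    # States: 0 = secure, 1 = partially compromised, 2 = fully compromised
    P = np.array([
        [0.90, 0.08, 0.02],
        [0.00, 0.85, 0.15],
        [0.00, 0.00, 1.00],
    ])

    state = np.array([1.0, 0.0, 0.0])      # start fully secure
    horizon = 10                           # project 10 steps ahead
    future = state @ np.linalg.matrix_power(P, horizon)
    print(future)                          # projected security posture distribution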

2018-03-05
Yusuf, S. E., Ge, M., Hong, J. B., Alzaid, H., Kim, D. S..  2017.  Evaluating the Effectiveness of Security Metrics for Dynamic Networks. 2017 IEEE Trustcom/BigDataSE/ICESS. :277–284.

It is difficult to assess the security of modern enterprise networks because they are usually dynamic, with configuration changes (such as changes in topology, firewall rules, etc.). Graphical security models (e.g., Attack Graphs and Attack Trees) and security metrics (e.g., attack cost, shortest attack path) are widely used to systematically analyse the security posture of network systems. However, there are problems in using them to assess the security of dynamic networks. First, the existing graphical security models are unable to capture dynamic changes occurring in the networks over time. Second, the existing security metrics are not designed for dynamic networks, so their effectiveness with respect to dynamic changes in the network is still unknown. In this paper, we conduct a comprehensive analysis via simulations to evaluate the effectiveness of security metrics using a Temporal Hierarchical Attack Representation Model. Further, we investigate the varying effects of security metrics when changes are observed in the dynamic networks. Our experimental analysis shows that different security metrics exhibit varying changes in measured security posture with respect to changes in the network.
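
As an illustration of one of the metrics analysed, the sketch below computes the shortest attack path cost on a single attack graph snapshot with Dijkstra's algorithm; the graph, edge costs, and node names are hypothetical, and T-HARM's temporal layering is not reproduced.

    # Illustrative sketch: the "shortest attack path" metric on one snapshot of an
    # attack graph, where edge weights are per-step attack costs. The graph below
    # is hypothetical; T-HARM's temporal layering is not reproduced here.
    import heapq

    attack_graph = {                      # node -> list of (next_node, attack_cost)
        "attacker": [("web", 2.0), ("vpn", 5.0)],
        "web":      [("app", 3.0)],
        "vpn":      [("app", 1.0)],
        "app":      [("db", 4.0)],
        "db":       [],
    }

    def shortest_attack_path_cost(graph, source, target):
        dist, heap = {source: 0.0}, [(0.0, source)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == target:
                return d
            if d > dist.get(node, float("inf")):
                continue
            for nxt, cost in graph[node]:
                nd = d + cost
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(heap, (nd, nxt))
        return float("inf")

    print(shortest_attack_path_cost(attack_graph, "attacker", "db"))   # 9.0 for this toy graph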