Biblio

Filters: Keyword is security metrics
Conference Paper
Noureddine, M. A., Marturano, A., Keefe, K., Bashir, M., Sanders, W. H..  2017.  Accounting for the Human User in Predictive Security Models. 2017 IEEE 22nd Pacific Rim International Symposium on Dependable Computing (PRDC). :329–338.

Given the growing sophistication of cyber attacks, designing a perfectly secure system is not generally possible. Quantitative security metrics are thus needed to measure and compare the relative security of proposed security designs and policies. Since the investigation of security breaches has shown a strong impact of human errors, ignoring the human user in computing these metrics can lead to misleading results. Despite this, and although security researchers have long observed the impact of human behavior on system security, few improvements have been made in designing systems that are resilient to the uncertainties in how humans interact with a cyber system. In this work, we develop an approach for including models of user behavior, emanating from the fields of social sciences and psychology, in the modeling of systems intended to be secure. We then illustrate how one of these models, namely general deterrence theory, can be used to study the effectiveness of the password security requirements policy and the frequency of security audits in a typical organization. Finally, we discuss the many challenges that arise when adopting such a modeling approach, and then present our recommendations for future work.

Chhillar, Dheeraj, Sharma, Kalpana.  2019.  ACT Testbot and 4S Quality Metrics in XAAS Framework. 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon). :503–509.

The purpose of this paper is to analyze all Cloud based Service Models, Continuous Integration, Deployment and Delivery processes and propose an Automated Continuous Testing and testing-as-a-service based TestBot and metrics dashboard that will be integrated with all existing automation, bug logging, build management, configuration and test management tools. Recently, cloud has been used by organizations to save the time, money and effort required to set up and maintain infrastructure and platforms. Continuous Integration and Delivery is practiced nowadays within the Agile methodology to give the capability of multiple software releases on a daily basis and to ensure that all the development, test and production environments can be synched up quickly. In such an agile environment there is a need to ramp up testing tools and processes so that overall regression testing, including functional, performance and security testing, can be done along with build deployments in real time. To support this phenomenon, we researched Continuous Testing and worked with industry professionals who are involved in architecting, developing and testing software products. A lot of research has been done towards automating software testing so that testing of a software product can be done quickly and the overall testing process can be optimized. As part of this paper we propose the ACT TestBot tool and metrics dashboard, and coin the term 4S quality metrics to quantify the quality of the software product. ACT TestBot and the metrics dashboard will be integrated with Continuous Integration tools, bug reporting tools, test management tools and data analytics tools to trigger automation scripts, continuously analyze application logs, open defects automatically and generate metrics reports. A defect pattern report will be created to support root cause analysis and to take preventive action.

Wortman, Paul A., Tehranipoor, Fatemeh, Chandy, John A..  2018.  An Adversarial Risk-based Approach for Network Architecture Security Modeling and Design. 2018 International Conference on Cyber Security and Protection of Digital Services (Cyber Security). :1–8.
Network architecture design and verification has become increasingly complicated as a greater number of security considerations, implementations, and factors are included in the design process. In the design process, one must account for the various costs of interwoven layers of security. Generally, these costs are simplified for evaluation of risk to the network. The obvious implications of adding security are the need to account for the impacts of loss (risk) and for the ensuing increased design costs. The considerations that are not traditionally examined are those of the adversary and the defender of a given system. Without accounting for the viewpoint of the individuals interacting with a network architecture, one cannot verify and select the most advantageous security implementation. This work presents a method for obtaining a security metric that takes into account not only the risk of the defender, but also the probability of an attack originating from the motivation of the adversary. We then move to a more meaningful metric based on a monetary unit that architects can use in choosing a best-fit solution for a given network critical path design problem.
Ahmed, Yussuf, Naqvi, Syed, Josephs, Mark.  2018.  Aggregation of Security Metrics for Decision Making: A Reference Architecture. Proceedings of the 12th European Conference on Software Architecture: Companion Proceedings. :53:1–53:7.
Existing security technologies play a significant role in protecting enterprise systems, but they are no longer enough on their own, given the number of successful cyberattacks against businesses and the sophistication of the tactics used by attackers to bypass the security defences. Security measurement is different from security monitoring in the sense that it provides a means to quantify the security of the systems, while security monitoring helps in identifying abnormal events and does not measure the actual state of an infrastructure's security. The goal of enterprise security metrics is to enable understanding of the overall security using measurements to guide decision making. In this paper we present a reference architecture for aggregating the measurement values from the different components of the system in order to enable stakeholders to see the overall security state of their enterprise systems and to assist with decision making. This will provide a new dimension to security management by shifting from security monitoring to security measurement.
Das, Anupam, Borisov, Nikita, Caesar, Matthew.  2014.  Analyzing an Adaptive Reputation Metric for Anonymity Systems. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :11:1–11:11.

Low-latency anonymity systems such as Tor rely on intermediate relays to forward user traffic; these relays, however, are often unreliable, resulting in a degraded user experience. Worse yet, malicious relays may introduce deliberate failures in a strategic manner in order to increase their chance of compromising anonymity. In this paper we propose using a reputation metric that can profile the reliability of relays in an anonymity system based on users' past experience. The two main challenges in building a reputation-based system for an anonymity system are: first, malicious participants can strategically oscillate between good and malicious nature to evade detection, and second, an observed failure in an anonymous communication cannot be uniquely attributed to a single relay. Our proposed framework addresses the former challenge by using a proportional-integral-derivative (PID) controller-based reputation metric that ensures malicious relays adopting time-varying strategic behavior obtain low reputation scores over time, and the latter by introducing a filtering scheme based on the evaluated reputation score to effectively discard relays mounting attacks. We collect data from the live Tor network and perform simulations to validate the proposed reputation-based filtering scheme. We show that an attacker does not gain any significant benefit by performing deliberate failures in the presence of the proposed reputation framework.
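
The PID-based reputation idea in this abstract can be illustrated with a minimal numerical sketch; the gains, target failure rate, and oscillating failure signal below are illustrative assumptions, not the paper's actual parameters.

```python
# Sketch of a PID-controller-based reputation score for a relay.
# Error signal: observed failure rate minus an acceptable target rate.
# Gains (kp, ki, kd) and the target rate are illustrative assumptions.

def make_pid_reputation(kp=1.0, ki=0.5, kd=0.2, target_failure_rate=0.05):
    integral = 0.0
    prev_error = 0.0

    def update(observed_failure_rate):
        nonlocal integral, prev_error
        error = observed_failure_rate - target_failure_rate
        integral += error                  # accumulates sustained failures
        derivative = error - prev_error    # reacts to sudden behavior changes
        prev_error = error
        penalty = kp * error + ki * integral + kd * derivative
        # Reputation drops with sustained or oscillating failures; clamp to [0, 1].
        return max(0.0, min(1.0, 1.0 - penalty))

    return update

rep = make_pid_reputation()
# A relay oscillating between good (failure rate 0.0) and malicious (0.5) phases:
scores = [rep(f) for f in [0.0, 0.5, 0.0, 0.5, 0.0, 0.5]]
```

Because the integral term accumulates failures across a relay's "good" phases, an oscillating relay cannot recover its score by behaving well intermittently, which is the time-varying-strategy property the abstract describes.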

Medeiros, N., Ivaki, N., Costa, P., Vieira, M..  2018.  An Approach for Trustworthiness Benchmarking Using Software Metrics. 2018 IEEE 23rd Pacific Rim International Symposium on Dependable Computing (PRDC). :84–93.

Trustworthiness is a paramount concern for users and customers in the selection of a software solution, especially in the context of complex and dynamic environments, such as Cloud and IoT. However, assessing and benchmarking trustworthiness (worthiness of software for being trusted) is a challenging task, mainly due to the variety of application scenarios (e.g., business-critical, safety-critical), the large number of determinative quality attributes (e.g., security, performance), and last, but foremost, due to the subjective notion of trust and trustworthiness. In this paper, we present trustworthiness as a measurable notion in relative terms based on security attributes and propose an approach for the assessment and benchmarking of software. The main goal is to build a trustworthiness assessment model based on software metrics (e.g., Cyclomatic Complexity, CountLine, CBO) that can be used as indicators of software security. To demonstrate the proposed approach, we assessed and ranked several files and functions of the Mozilla Firefox project based on their trustworthiness score and conducted a survey among several software security experts in order to validate the obtained rank. Results show that our approach is able to provide a sound ranking of the benchmarked software.

Mukherjee, Preetam, Mazumdar, Chandan.  2018.  Attack Difficulty Metric for Assessment of Network Security. Proceedings of the 13th International Conference on Availability, Reliability and Security. :44:1–44:10.
In recent times, organizational networks have become targets of sophisticated multi-hop attacks. The Attack Graph has been proposed as a useful modeling tool for complex attack scenarios, combining multiple vulnerabilities in causal chains. Analysis of attack scenarios enables security administrators to calculate quantitative security measurements, which justify security investments in the organization. Different security metrics based on attack graphs have been introduced for evaluation of comparable security measurements. Studies show that the difficulty of exploiting the same vulnerability changes with the change of its position in the causal chains of the attack graph. In this paper, a new security metric based on attack graphs, namely Attack Difficulty, is proposed to include this position factor. The security metrics are classified in two major categories, viz. counting metrics and difficulty-based metrics. The proposed Attack Difficulty metric employs both categories of metrics as the basis for its measurement. Case studies are presented to demonstrate the applicability of the proposed metric. A comparison of this new metric with other attack graph based security metrics is also included to validate its acceptance in real-life situations.
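
The position-dependence described in this abstract can be sketched as a toy computation over an attack graph; the graph, the per-exploit difficulty scores, and the min-over-paths aggregation are illustrative assumptions, not the paper's actual metric definition.

```python
# Hypothetical attack graph as a DAG: edges are exploits, each annotated
# with a difficulty score. Path difficulty is the sum of edge difficulties;
# the metric for a goal is taken here as the minimum over all attack paths
# (i.e., the easiest causal chain). All values are illustrative.

graph = {
    "internet":    [("web_server", 0.3), ("vpn_gateway", 0.7)],
    "web_server":  [("db_server", 0.5)],
    "vpn_gateway": [("db_server", 0.2)],
    "db_server":   [],
}

def attack_difficulty(graph, node, goal):
    if node == goal:
        return 0.0
    paths = [d + attack_difficulty(graph, nxt, goal)
             for nxt, d in graph[node]]
    return min(paths) if paths else float("inf")

difficulty = attack_difficulty(graph, "internet", "db_server")
```

The goal's score reflects the easiest causal chain, so the same target vulnerability contributes differently depending on which chain (and position in it) an attacker uses to reach it.
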
Pope, Aaron Scott, Morning, Robert, Tauritz, Daniel R., Kent, Alexander D..  2018.  Automated Design of Network Security Metrics. Proceedings of the Genetic and Evolutionary Computation Conference Companion. :1680–1687.

Many abstract security measurements are based on characteristics of a graph that represents the network. These are typically simple and quick to compute but are often of little practical use in making real-world predictions. Practical network security is often measured using simulation or real-world exercises. These approaches better represent realistic outcomes but can be costly and time-consuming. This work aims to combine the strengths of these two approaches, developing efficient heuristics that accurately predict attack success. Hyper-heuristic machine learning techniques, trained on network attack simulation training data, are used to produce novel graph-based security metrics. These low-cost metrics serve as an approximation for simulation when measuring network security in real time. The approach is tested and verified using a simulation based on activity from an actual large enterprise network. The results demonstrate the potential of using hyper-heuristic techniques to rapidly evolve and react to emerging cybersecurity threats.

Taylor, Joshua, Zaffarano, Kara, Koller, Ben, Bancroft, Charlie, Syversen, Jason.  2016.  Automated Effectiveness Evaluation of Moving Target Defenses: Metrics for Missions and Attacks. Proceedings of the 2016 ACM Workshop on Moving Target Defense. :129–134.

In this paper, we describe the results of several experiments designed to test two dynamic network moving target defenses against a propagating data exfiltration attack. We designed a collection of metrics to assess the costs to mission activities and the benefits in the face of attacks and evaluated the impacts of the moving target defenses in both areas. Experiments leveraged Siege's Cyber-Quantification Framework to automatically provision the networks used in the experiment, install the two moving target defenses, collect data, and analyze the results. We identify areas in which the costs and benefits of the two moving target defenses differ, and note some of their unique performance characteristics.

Enoch, Simon Yusuf, Hong, Jin B., Ge, Mengmeng, Alzaid, Hani, Kim, Dong Seong.  2018.  Automated Security Investment Analysis of Dynamic Networks. Proceedings of the Australasian Computer Science Week Multiconference. :6:1–6:10.
It is important to assess the cost benefits of IT security investments. Typically, this is done by a manual risk assessment process. In this paper, we propose an approach to automate this using graphical security models (GSMs). GSMs have been used to assess the security of networked systems using various security metrics. Most of the existing GSMs assume that networks are static; however, modern networks (e.g., Cloud and Software Defined Networking) are dynamic and subject to change. Thus, it is important to develop an approach that takes into account the dynamic aspects of networks. To this end, we automate security investment analysis of dynamic networks using a GSM named the Temporal-Hierarchical Attack Representation Model (T-HARM) in order to automatically evaluate security investments and their effectiveness for a given period of time. We demonstrate our approach via simulations.
Munaiah, Nuthan, Meneely, Andrew.  2016.  Beyond the Attack Surface: Assessing Security Risk with Random Walks on Call Graphs. Proceedings of the 2016 ACM Workshop on Software PROtection. :3–14.

When reasoning about software security, researchers and practitioners use the phrase "attack surface" as a metaphor for risk. Enumerate and minimize the ways attackers can break in, the metaphor says, and risk is reduced and the system is better protected. But software systems are much more complicated than their surfaces. We propose function- and file-level attack surface metrics (proximity and risky walk) that enable fine-grained risk assessment. Our risky walk metric is highly configurable: we use PageRank on a probability-weighted call graph to simulate attacker behavior of finding or exploiting a vulnerability. We provide evidence-based guidance for deploying these metrics, including an extensive parameter tuning study. We conducted an empirical study on two large open source projects, FFmpeg and Wireshark, to investigate the potential correlation between our metrics and historical post-release vulnerabilities. We found our metrics to be statistically significantly associated with vulnerable functions/files with a small-to-large Cohen's d effect size. Our prediction model achieved an increase of 36% (in FFmpeg) and 27% (in Wireshark) in the average value of F-measure over a base model built with SLOC and coupling metrics. Our prediction model outperformed comparable models from prior literature with notable improvements: 58% reduction in false negative rate, 81% reduction in false positive rate, and 548% increase in F-measure. These metrics advance vulnerability prevention by being flexible in terms of granularity, performing better than models from the vulnerability prediction literature, and being tunable so that practitioners can tailor the metrics to their products and better assess security risk.
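
The risky walk metric is described as PageRank over a probability-weighted call graph; the following is a minimal power-iteration sketch of that idea over an illustrative three-edge call graph (the paper's actual weighting scheme and tuned parameters are not reproduced here).

```python
# Power-iteration PageRank over a call graph whose edge weights approximate
# the probability an attacker moves from one function to another.
# Graph, weights, and damping factor are illustrative assumptions.

def risky_walk(edges, damping=0.85, iters=100):
    nodes = sorted({n for e in edges for n in e[:2]})
    out = {n: [] for n in nodes}
    for src, dst, w in edges:
        out[src].append((dst, w))
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src in nodes:
            total = sum(w for _, w in out[src])
            if total == 0:  # dangling node: spread its rank uniformly
                for n in nodes:
                    nxt[n] += damping * rank[src] / len(nodes)
            else:           # normalize outgoing weights into probabilities
                for dst, w in out[src]:
                    nxt[dst] += damping * rank[src] * w / total
        rank = nxt
    return rank

# An entry point that mostly feeds a function parsing untrusted input:
edges = [("entry", "parse", 0.9), ("entry", "log", 0.1),
         ("parse", "alloc", 1.0)]
rank = risky_walk(edges)
```

Functions reachable from entry points with high transition probability accumulate rank, approximating where an attacker's exploration would concentrate.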

Pan, K., Teixeira, A. M. H., Cvetkovic, M., Palensky, P..  2016.  Combined data integrity and availability attacks on state estimation in cyber-physical power grids. 2016 IEEE International Conference on Smart Grid Communications (SmartGridComm). :271–277.

This paper introduces combined data integrity and availability attacks to expand the attack scenarios against power system state estimation. The goal of the adversary, who uses the combined attack, is to perturb the state estimates while remaining hidden from the observer. We propose security metrics that quantify the vulnerability of power grids to combined data attacks under single- and multi-path routing communication models. In order to evaluate the proposed security metrics, we formulate them as mixed integer linear programming (MILP) problems. The relation between the security metrics of combined data attacks and pure data integrity attacks is analyzed, based on which we show that, when data availability and data integrity attacks have the same cost, the two metrics coincide. When data availability attacks have a lower cost than data integrity attacks, we show that a combined data attack could be executed with fewer attack resources compared to pure data integrity attacks. Furthermore, it is shown that combined data attacks would bypass integrity-focused mitigation schemes. These conclusions are supported by results obtained on a power system model, with and without a communication model using single- or multi-path routing.

Llopis, S., Hingant, J., Pérez, I., Esteve, M., Carvajal, F., Mees, W., Debatty, T..  2018.  A Comparative Analysis of Visualisation Techniques to Achieve Cyber Situational Awareness in the Military. 2018 International Conference on Military Communications and Information Systems (ICMCIS). :1–7.
Starting from a common fictional scenario, simulated data sources and a set of measurements feed two different visualization techniques with the aim of making a comparative analysis. Both visualization techniques described in this paper use the operational picture concept, deemed the most appropriate tool for military commanders and their staff to achieve cyber situational awareness and to understand the cyber defence implications in operations. The Cyber Common Operational Picture (CyCOP) is a tool developed by Universitat Politècnica de València in collaboration with the Spanish Ministry of Defence whose objective is to generate Cyber Hybrid Situational Awareness (CyHSA). The Royal Military Academy in Belgium developed a 3D Operational Picture able to display mission-critical elements intuitively using a priori defined domain knowledge. The comparative analysis will assist researchers in advancing solutions and implementation aspects.
Zhang, H., Lou, F., Fu, Y., Tian, Z..  2017.  A Conditional Probability Computation Method for Vulnerability Exploitation Based on CVSS. 2017 IEEE Second International Conference on Data Science in Cyberspace (DSC). :238–241.
Computing the probability of vulnerability exploitation in Bayesian attack graphs (BAGs) is a key process for network security assessment. The conditional probability of vulnerability exploitation can be obtained from the exploitability score of NIST's Common Vulnerability Scoring System (CVSS). However, the method that N. Poolsappasit et al. proposed for computing this conditional probability applies only to CVSS metric version 2.0 and cannot be used with the other two versions. In this paper, we present two methods for computing the conditional probability based on CVSS's other two metric versions, version 1.0 and version 3.0, respectively. Combined with the method of N. Poolsappasit et al., this makes the CVSS-based conditional probability computation of vulnerability exploitation complete across versions.
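
One common way to turn the CVSS v2 exploitability sub-score into a probability, in the spirit of the method discussed above, is to scale it into [0, 1]; the value tables below follow the CVSS v2 specification, while the division by 10 is the usual scaling convention and is stated here as an assumption, not a verbatim reproduction of the cited method.

```python
# CVSS v2 exploitability sub-score: 20 * AccessVector * AccessComplexity
# * Authentication, scaled by 1/10 to yield a value usable as a
# conditional exploitation probability in a Bayesian attack graph.

ACCESS_VECTOR = {"L": 0.395, "A": 0.646, "N": 1.0}       # local/adjacent/network
ACCESS_COMPLEXITY = {"H": 0.35, "M": 0.61, "L": 0.71}    # high/medium/low
AUTHENTICATION = {"M": 0.45, "S": 0.56, "N": 0.704}      # multiple/single/none

def exploit_probability_v2(av, ac, au):
    exploitability = (20 * ACCESS_VECTOR[av]
                      * ACCESS_COMPLEXITY[ac]
                      * AUTHENTICATION[au])
    return exploitability / 10.0

# Remotely exploitable, low complexity, no authentication required:
p = exploit_probability_v2("N", "L", "N")
```

A fully network-exploitable, low-complexity, no-authentication vulnerability scores near 1.0, while a local, high-complexity vulnerability requiring multiple authentications scores far lower.
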
Li, Meng, Lai, Liangzhen, Chandra, Vikas, Pan, David Z..  2017.  Cross-Level Monte Carlo Framework for System Vulnerability Evaluation Against Fault Attack. Proceedings of the 54th Annual Design Automation Conference 2017. :17:1–17:6.

Fault attacks have become a serious threat to system security and must be evaluated at the design stage. Existing methods usually ignore the intrinsic uncertainty in the attack process and suffer from low scalability. In this paper, we develop a general framework to evaluate system vulnerability against fault attack. A holistic model for fault injection is incorporated to capture the probabilistic nature of the attack process. Based on the probabilistic model, a security metric named the System Security Factor (SSF) is defined to measure system vulnerability. In the framework, a Monte Carlo method is leveraged to enable a feasible evaluation of SSF for different systems, security policies, and attack techniques. We enhance the framework with a novel system pre-characterization procedure, based on which an importance sampling strategy is proposed. Experimental results on a commercial processor demonstrate that, compared to random sampling, a 2500X speedup is achieved with the proposed sampling strategy. Meanwhile, 3% of registers are identified to contribute to more than 95% of the SSF. By hardening these registers, a 6.5X security improvement can be achieved with less than 2% area overhead.
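
The Monte Carlo and importance-sampling ideas can be illustrated on a toy fault model; the single-critical-bit fault model and the 50/50 proposal distribution below are illustrative assumptions, not the paper's actual pre-characterization procedure.

```python
# Toy estimate of an SSF-style quantity: the probability that a randomly
# injected fault defeats a security check. Fault model (flip one bit of
# an 8-bit comparison register; only the top bit fails open) is assumed.

import random

def fault_breaks_check(bit):
    return bit == 7  # hypothetical: only flipping the top bit breaks the check

def mc_estimate(n, seed=0):
    # Plain Monte Carlo: sample fault locations uniformly.
    rng = random.Random(seed)
    hits = sum(fault_breaks_check(rng.randrange(8)) for _ in range(n))
    return hits / n

def importance_estimate(n, seed=0):
    # Importance sampling: sample the suspected-critical top bit half the
    # time, then reweight each sample by target/proposal probability.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        if rng.random() < 0.5:
            bit, q = 7, 0.5            # proposal mass on the critical bit
        else:
            bit, q = rng.randrange(7), 0.5 / 7
        total += fault_breaks_check(bit) * (1 / 8) / q
    return total / n

plain = mc_estimate(10000)
weighted = importance_estimate(10000)
```

Both estimators target the same probability (1/8 here); the importance sampler concentrates effort on the suspected-critical location and reweights, which is the mechanism behind the large speedups the abstract reports.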

Kebande, V. R., Kigwana, I., Venter, H. S., Karie, N. M., Wario, R. D..  2018.  CVSS Metric-Based Analysis, Classification and Assessment of Computer Network Threats and Vulnerabilities. 2018 International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD). :1–10.

This paper provides a Common Vulnerability Scoring System (CVSS) metric-based technique for classifying and analysing the prevailing Computer Network Security Vulnerabilities and Threats (CNSVT). The problem addressed in this paper is that, at the time of writing, there existed no effective approaches for analysing and classifying CNSVT for purposes of assessments based on CVSS metrics. The authors have achieved this by generating a CVSS metric-based dynamic Vulnerability Analysis Classification Countermeasure (VACC) criterion that is able to rank vulnerabilities. The CVSS metric-based VACC has allowed the computation of a vulnerability Similarity Measure (VSM) using the Hamming and Euclidean distance metric functions. The CVSS metric-based VACC also enabled the random measuring of the VSM for a selected number of vulnerabilities based on the [Ma-Ma], [Ma-Mi], [Mi-Ci], [Ma-Ci] ranking score. This technique is aimed at allowing security experts to conduct proper vulnerability detection and assessments across computer-based networks by checking the probability that given threats will occur. The authors have also proposed high-level countermeasures for the vulnerabilities that have been listed. The authors have evaluated the CVSS metric-based VACC and the results are promising. Based on this technique, it is worth noting that these propositions can help in the development of stronger computer and network security tools.
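
The Hamming and Euclidean distance functions used for the VSM are standard; a minimal sketch over two hypothetical encoded metric vectors follows (the encoding and the values are illustrative assumptions, not the paper's data).

```python
# Vulnerability Similarity Measure sketch: compare two vulnerabilities'
# encoded CVSS-style metric vectors with Hamming and Euclidean distances.

import math

def hamming(u, v):
    # Number of metric positions where the two vulnerabilities differ.
    return sum(a != b for a, b in zip(u, v))

def euclidean(u, v):
    # Magnitude of the difference across all metric positions.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Hypothetical encoded vectors (e.g., access vector, complexity,
# authentication, impact); only the complexity entry differs.
v1 = [1.0, 0.71, 0.704, 0.66]
v2 = [1.0, 0.35, 0.704, 0.66]

h = hamming(v1, v2)
e = euclidean(v1, v2)
```

Hamming counts how many encoded metrics differ, while Euclidean distance also reflects how far apart the differing values are, so the two distances support complementary similarity rankings.
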

Doynikova, E., Kotenko, I..  2017.  CVSS-Based Probabilistic Risk Assessment for Cyber Situational Awareness and Countermeasure Selection. 2017 25th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP). :346–353.

The paper suggests several techniques for computer network risk assessment based on the Common Vulnerability Scoring System (CVSS) and attack modeling. The techniques use a set of integrated security metrics and consider input data from security information and event management (SIEM) systems. The risk assessment techniques differ according to the input data used, allowing one to obtain a risk assessment that meets requirements for accuracy and efficiency. Input data includes network characteristics, attacks, attacker characteristics, security events and countermeasures. The tool that implements these techniques is presented. Experiments demonstrate the operation of the techniques for different security situations.

Onwubiko, Cyril, Onwubiko, Austine.  2019.  Cyber KPI for Return on Security Investment. 2019 International Conference on Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA). :1–8.

Cyber security return on investment (RoI), or return on security investment (RoSI), is extremely challenging to measure. This is partly because it is difficult to measure the actual cost of a cyber security incident or of cyber security proceeds. It is further complicated by the fact that there are no consensus metrics that every organisation agrees to, and even among cyber subject matter experts there is no set of agreed parameters or metrics against which cyber security benefits or rewards can be assessed. One approach to demonstrating return on security investment is to produce cyber security reports of certain key performance indicators (KPIs) and metrics, such as the number of cyber incidents detected, the number of cyber-attacks or terrorist attacks that were foiled, or ongoing monitoring capabilities. These are some of the demonstrable and empirical metrics that could be used to measure RoSI. In this abstract paper, we investigate some of the cyber KPIs and metrics to be considered for a cyber dashboard and reporting for RoSI.

Al-Shaer, Ehab.  2016.  A Cyber Mutation: Metrics, Techniques and Future Directions. Proceedings of the 2016 ACM Workshop on Moving Target Defense. :1–1.

After decades of cyber warfare, it is well known that the static and predictable behavior of cyber configurations provides a great advantage to adversaries to plan and launch their attacks successfully. At the same time, as cyber attacks are getting highly stealthy and more sophisticated, their detection and mitigation become much harder and more expensive. We developed a new foundation for moving target defense (MTD) based on cyber mutation, a new concept in cybersecurity that reverses this asymmetry in cyber warfare by embedding agility into cyber systems. Cyber mutation enables cyber systems to automatically change their configuration parameters in an unpredictable, safe and adaptive manner in order to proactively achieve one or more of the following MTD goals: (1) deceiving attackers from reaching their goals, (2) disrupting their plans by changing adversarial behaviors, and (3) deterring adversaries by prohibitively increasing the attack effort and cost. In this talk, we will present the formal foundations, metrics and framework for developing effective cyber mutation techniques. The talk will also review several examples of developed techniques, including Random Host Mutation, Random Route Mutation, fingerprinting mutation, and mutable virtual networks, and will address the evaluation and lessons learned for advancing future research in this area.
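
As a toy illustration of host-address mutation (one of the techniques named above), the following sketch periodically remaps a host to a fresh address drawn from an unused pool; the pool, the selection policy, and the mutation schedule are illustrative assumptions.

```python
# Random host (address) mutation sketch: on each mutation interval the
# host receives a new virtual IP, never repeating the current one, so
# that earlier reconnaissance results go stale.

import random

def mutate_address(rng, pool, current):
    # Exclude the current address so consecutive assignments always differ.
    choices = [ip for ip in pool if ip != current]
    return rng.choice(choices)

rng = random.Random(42)
pool = [f"10.0.0.{i}" for i in range(2, 12)]  # hypothetical unused range
addr = "10.0.0.2"
history = []
for _ in range(5):                            # five mutation intervals
    addr = mutate_address(rng, pool, addr)
    history.append(addr)
```

Because the current address is excluded from each draw, consecutive addresses always differ, which is the unpredictability property that makes an attacker's scan results perishable.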

Torkura, Kennedy A., Sukmana, Muhammad I.H., Kayem, Anne V.D.M., Cheng, Feng, Meinel, Christoph.  2018.  A Cyber Risk Based Moving Target Defense Mechanism for Microservice Architectures. 2018 IEEE Intl Conf on Parallel Distributed Processing with Applications, Ubiquitous Computing Communications, Big Data Cloud Computing, Social Computing Networking, Sustainable Computing Communications (ISPA/IUCC/BDCloud/SocialCom/SustainCom). :932–939.
Microservice Architectures (MSA) structure applications as a collection of loosely coupled services that implement business capabilities. The key advantages of MSA include inherent support for continuous deployment of large complex applications, agility and enhanced productivity. However, studies indicate that most MSA are homogeneous and introduce shared vulnerabilities, and are thus vulnerable to multi-step attacks, which create economies-of-scale incentives for attackers. In this paper, we address the issue of shared vulnerabilities in microservices with a novel solution based on the concept of Moving Target Defenses (MTD). Our mechanism works by performing risk analysis against microservices to detect and prioritize vulnerabilities. Thereafter, security risk-oriented software diversification is employed, guided by a defined diversification index. The diversification is performed at runtime, leveraging both model- and template-based automatic code generation techniques to automatically transform the programming languages and container images of the microservices. Consequently, the microservices' attack surfaces are altered, thereby introducing uncertainty for attackers while reducing the attackability of the microservices. Our experiments demonstrate the efficiency of our solution, with an average attack surface randomization success rate of over 70%.
Thuraisingham, B., Kantarcioglu, M., Hamlen, K., Khan, L., Finin, T., Joshi, A., Oates, T., Bertino, E..  2016.  A Data Driven Approach for the Science of Cyber Security: Challenges and Directions. 2016 IEEE 17th International Conference on Information Reuse and Integration (IRI). :1–10.

This paper describes a data driven approach to studying the science of cyber security (SoS). It argues that science is driven by data. It then describes issues and approaches towards the following three aspects: (i) Data Driven Science for Attack Detection and Mitigation, (ii) Foundations for Data Trustworthiness and Policy-based Sharing, and (iii) A Risk-based Approach to Security Metrics. We believe that the three aspects addressed in this paper will form the basis for studying the Science of Cyber Security.

Munaiah, Nuthan, Meneely, Andrew.  2019.  Data-Driven Insights from Vulnerability Discovery Metrics. 2019 IEEE/ACM Joint 4th International Workshop on Rapid Continuous Software Engineering and 1st International Workshop on Data-Driven Decisions, Experimentation and Evolution (RCoSE/DDrEE). :1–7.

Software metrics help developers discover and fix mistakes. However, despite promising empirical evidence, vulnerability discovery metrics are seldom relied upon in practice. In prior research, the effectiveness of these metrics has typically been expressed using the precision and recall of a prediction model that uses the metrics as explanatory variables. These prediction models, being black boxes, may not be perceived as useful by developers. However, by systematically interpreting the models and metrics, we can provide developers with nuanced insights about factors that have led to security mistakes in the past. In this paper, we present a preliminary approach to using vulnerability discovery metrics to provide insightful feedback to developers as they engineer software. We collected ten metrics (churn, collaboration centrality, complexity, contribution centrality, nesting, known offender, source lines of code, # inputs, # outputs, and # paths) from six open-source projects. We assessed the generalizability of the metrics across two contextual dimensions (application domain and programming language) and between projects within a domain, computed thresholds for the metrics using an unsupervised approach from the literature, and assessed the ability of these unsupervised thresholds to classify risk from historical vulnerabilities in the Chromium project. The observations from this study feed into our ongoing research to automatically aggregate insights from the various analyses to generate natural language feedback on security. We hope that our approach to generating automated feedback will accelerate the adoption of research in vulnerability discovery metrics.
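
The unsupervised thresholding step can be sketched with a simple quantile rule; the 90th-percentile choice and the per-file churn values below are illustrative assumptions rather than the specific approach from the literature that the paper applies.

```python
# Unsupervised metric threshold sketch: flag a file as risky when a
# metric (here, churn) exceeds a high quantile of the project-wide
# distribution of that metric. Quantile level and data are assumed.

def quantile(values, q):
    # Nearest-rank style quantile over a small sample.
    s = sorted(values)
    idx = int(q * (len(s) - 1))
    return s[idx]

churn = [3, 5, 2, 40, 7, 4, 6, 120, 5, 8]  # hypothetical per-file churn
threshold = quantile(churn, 0.9)
risky = [i for i, c in enumerate(churn) if c > threshold]
```

No labeled vulnerability data is needed: the project's own metric distribution defines what counts as an outlier, which is what makes the thresholds transferable across projects and domains.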

Yee, George O. M..  2019.  Designing Good Security Metrics. 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC). 2:580–585.

This paper begins with an introduction to security metrics, describing the need for security metrics, followed by a discussion of the nature of security metrics, including the challenges found with some security metrics used in the past. The paper then discusses what makes a good security metric and proposes a rigorous step-by-step method that can be applied to design good security metrics, and to test existing security metrics to see if they are good metrics. Application examples are included to illustrate the method.

Barthe, Gilles, Farina, Gian Pietro, Gaboardi, Marco, Arias, Emilio Jesus Gallego, Gordon, Andy, Hsu, Justin, Strub, Pierre-Yves.  2016.  Differentially Private Bayesian Programming. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :68–79.

We present PrivInfer, an expressive framework for writing and verifying differentially private Bayesian machine learning algorithms. Programs in PrivInfer are written in a rich functional probabilistic programming language with constructs for performing Bayesian inference. Then, differential privacy of programs is established using a relational refinement type system, in which refinements on probability types are indexed by a metric on distributions. Our framework leverages recent developments in Bayesian inference, probabilistic programming languages, and in relational refinement types. We demonstrate the expressiveness of PrivInfer by verifying privacy for several examples of private Bayesian inference.

Casola, V., Benedictis, A. D., Rak, M., Villano, U..  2015.  DoS Protection in the Cloud through the SPECS Services. 2015 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC). :677–682.

Security in cloud environments is always considered an issue, due to the lack of control over leased resources. In this paper, we present a solution that offers security-as-a-service by relying on Security Service Level Agreements (Security SLAs) as a means to represent the security features to be granted. In particular, we focus on a security mechanism that is automatically configured and activated in an as-a-service fashion in order to protect cloud resources against DoS attacks. The activities reported in this paper are part of a wider work carried out in the FP7-ICT programme project SPECS, which aims at building a framework offering Security-as-a-Service using an SLA-based approach. The proposed approach is founded on the adoption of SPECS Services to negotiate, enforce and monitor suitable security metrics, chosen by cloud customers, negotiated with the provider and included in a signed Security SLA.