Biblio

Filters: Keyword is Security Risk Estimation
Portolan, Michele, Savino, Alessandro, Leveugle, Regis, Di Carlo, Stefano, Bosio, Alberto, Di Natale, Giorgio.  2019.  Alternatives to Fault Injections for Early Safety/Security Evaluations. 2019 IEEE European Test Symposium (ETS). :1–10.
Functional Safety standards like ISO 26262 require a detailed analysis of the dependability of components subjected to perturbations. Radiation testing or even much more abstract RTL fault injection campaigns are costly and complex to set up, especially for SoCs and Cyber Physical Systems (CPSs) comprising intertwined hardware and software. Moreover, some approaches are only applicable at the very end of the development cycle, making potential iterations difficult when market pressure and cost reduction are paramount. In this tutorial, we present a summary of classical state-of-the-art approaches, followed by alternative approaches for dependability analysis that can give an early yet accurate estimation of the safety or security characteristics of HW-SW systems. Designers can rely on these tools to identify issues in their design to be addressed by protection mechanisms, ensuring that system dependability constraints are met with limited risk when the design is later subjected to the usual fault injections and, e.g., radiation testing or laser attacks for certification.
Zhai, Liming, Wang, Lina, Ren, Yanzhen.  2019.  Multi-domain Embedding Strategies for Video Steganography by Combining Partition Modes and Motion Vectors. 2019 IEEE International Conference on Multimedia and Expo (ICME). :1402–1407.
Digital video has various types of entities, which are utilized as embedding domains to hide messages in steganography. However, nearly all video steganography uses only one type of embedding domain, resulting in limited embedding capacity and potential security risks. In this paper, we first propose embedding in multiple domains for video steganography by combining partition modes (PMs) and motion vectors (MVs). The multi-domain embedding (MDE) aims to spread the modifications across different embedding domains to achieve higher undetectability. The key issue of MDE is the interaction of entities across domains. To this end, we design two MDE strategies, which hide data in the PM domain and the MV domain by sequential embedding and simultaneous embedding, respectively. These two strategies can be applied to existing steganography within a distortion-minimization framework. Experiments show that the MDE strategies achieve a significant improvement in security performance against targeted steganalysis and fusion-based steganalysis.
Hasnat, Md Abul, Rahnamay-Naeini, Mahshid.  2019.  A Data-Driven Dynamic State Estimation for Smart Grids under DoS Attack using State Correlations. 2019 North American Power Symposium (NAPS). :1–6.
The denial-of-service (DoS) attack is a very common type of cyber attack that can affect critical cyber-physical systems, such as smart grids, by hampering the monitoring and control of the system, for example, by making data from the attacked zone unavailable. While developing countermeasures can help reduce such risks, it is essential to develop techniques for recovering from such scenarios, if they occur, by estimating the state of the system. Treating the continuous data stream from the PMUs as time series, this work exploits bus-to-bus cross-correlations to estimate the state of the system's components under attack using the PMU data of the remaining buses. By applying this technique, the state of the power system can be estimated under various DoS attack sizes with high accuracy. The estimation accuracy, in terms of the mean squared error (MSE), is then used to identify the relative vulnerability of the grid's PMUs and the most vulnerable time for a DoS attack.
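
A minimal sketch of the cross-correlation idea, assuming synthetic PMU traces and a simple least-squares mapping from the most correlated healthy bus; the paper's estimator is more elaborate, so every name and parameter here is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
base = np.sin(2 * np.pi * 0.5 * t)                       # shared grid dynamics
pmu = np.array([base + 0.05 * rng.standard_normal(t.size) for _ in range(5)])

attacked = 2                                             # bus silenced by the DoS
healthy = [i for i in range(pmu.shape[0]) if i != attacked]

# Pick the healthy bus most correlated with the attacked one,
# using only pre-attack history.
pre, atk = slice(0, 600), slice(600, 1000)
corr = [abs(np.corrcoef(pmu[attacked][pre], pmu[i][pre])[0, 1]) for i in healthy]
donor = healthy[int(np.argmax(corr))]

# Fit a linear map donor -> attacked on pre-attack samples, apply during the attack.
a, b = np.polyfit(pmu[donor][pre], pmu[attacked][pre], 1)
estimate = a * pmu[donor][atk] + b

mse = np.mean((estimate - pmu[attacked][atk]) ** 2)      # ground truth known here
print(f"MSE of the reconstructed stream: {mse:.4f}")
```
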
Lou, Xiaoxin, Song, Xiulan, He, Defeng, Meng, Liming.  2019.  Secure estimation for intelligent connected vehicle systems against sensor attacks. 2019 Chinese Control Conference (CCC). :6658–6662.
Intelligent connected vehicle systems tightly integrate computing, communication, and control. They can increase traffic throughput, minimize the risk of accidents, and reduce energy consumption. However, because of the openness of the vehicular ad hoc network, such systems are vulnerable to cyber-attacks that may have disastrous consequences. Hence, it is of interest to design connected vehicle systems that are resilient to sensor attacks. This paper focuses on the estimation and control of intelligent connected vehicle systems when the sensors or the wireless channels of the system are under attack. We give an upper bound on the number of corrupted sensors that can be corrected, and we design a state estimator that reconstructs the initial state together with a closed-loop controller. Finally, we verify the algorithm for the connected vehicle system through classical simulations.
Tun, Hein, Lupin, Sergey, Than, Ba Hla, Nay Zaw Linn, Kyaw, Khaing, Min Thu.  2019.  Estimation of Information System Security Using Hybrid Simulation in AnyLogic. 2019 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :1829–1834.
Nowadays the role of information systems in our lives has greatly increased, which has become one of the biggest challenges for citizens, organizations, and governments. Every single day we become more and more dependent on information and communication technology (ICT). A major goal of information security is to find the best ways to mitigate the risks. The context-role and perimeter-protection approaches can reduce and prevent unauthorized penetration of protected zones and of the information systems inside those zones. The results of this work can be useful for security system analysis and for the optimization of organizations' security.
Niemiec, Marcin, Jaglarz, Piotr, Jekot, Marcin, Chołda, Piotr, Boryło, Piotr.  2019.  Risk Assessment Approach to Secure Northbound Interface of SDN Networks. 2019 International Conference on Computing, Networking and Communications (ICNC). :164–169.
The most significant threats to networks usually originate from external entities. As such, the Northbound interface of SDN networks, which ensures communication with external applications, requires particularly close attention. In this paper we propose the Risk Assessment and Management approach to SEcure SDN (RAMSES). This novel solution is able to estimate the risk associated with traffic demand requests received via the Northbound-API in SDN networks. RAMSES quantifies the impact on network cost incurred by expected traffic demands and specifies the likelihood of adverse requests estimated using the reputation system. Accurate risk estimation allows SDN network administrators to make the right decisions and mitigate potential threat scenarios. This can be observed through extensive numerical verification based on a network optimization tool and several scenarios related to the reputation of the sender of the request. The verification of RAMSES confirmed the usefulness of its risk assessment approach to protecting SDN networks against threats associated with the Northbound-API.
Hettiarachchi, Charitha, Do, Hyunsook.  2019.  A Systematic Requirements and Risks-Based Test Case Prioritization Using a Fuzzy Expert System. 2019 IEEE 19th International Conference on Software Quality, Reliability and Security (QRS). :374–385.

The use of risk information can help software engineers identify software components that are likely vulnerable or require extra attention during testing. Some studies have shown that requirements risk-based approaches can improve the effectiveness of regression testing techniques. However, the risk estimation processes used in such approaches can be subjective, time-consuming, and costly. In this research, we introduce a fuzzy expert system that emulates human thinking to address the subjectivity-related issues in the risk estimation process in a systematic and efficient way, and thus further improve the effectiveness of test case prioritization. Further, the required data for our approach was gathered by employing a semi-automated process that made the risk estimation less subjective. The empirical results indicate that the new prioritization approach can improve the rate of fault detection over several existing test case prioritization techniques, while reducing the threats posed by subjective risk estimation.
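
A compact Mamdani-style fuzzy inference sketch conveys how such an expert system turns linguistic rules into a crisp risk score; the two inputs (requirements complexity, fault history) and the rule base below are invented for illustration and are not the authors' system:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

risk_x = np.linspace(0, 10, 101)          # universe of the output risk score

def estimate_risk(complexity, fault_history):
    # Fuzzify both inputs (0..10 scales) into "low" and "high" memberships.
    comp_lo, comp_hi = tri(complexity, -1, 0, 6), tri(complexity, 4, 10, 11)
    hist_lo, hist_hi = tri(fault_history, -1, 0, 6), tri(fault_history, 4, 10, 11)
    # Mamdani max-min rules:
    #   R1: complexity low  AND history low  -> risk low
    #   R2: complexity high OR  history high -> risk medium
    #   R3: complexity high AND history high -> risk high
    agg = np.maximum.reduce([
        np.minimum(min(comp_lo, hist_lo), tri(risk_x, -1, 0, 5)),
        np.minimum(max(comp_hi, hist_hi), tri(risk_x, 2.5, 5, 7.5)),
        np.minimum(min(comp_hi, hist_hi), tri(risk_x, 5, 10, 11)),
    ])
    return (risk_x * agg).sum() / agg.sum()   # centroid defuzzification

print(estimate_risk(8, 7))   # complex, fault-prone component -> high risk score
```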

Sion, Laurens, Van Landuyt, Dimitri, Wuyts, Kim, Joosen, Wouter.  2019.  Privacy Risk Assessment for Data Subject-Aware Threat Modeling. 2019 IEEE Security and Privacy Workshops (SPW). :64–71.
Regulatory efforts such as the General Data Protection Regulation (GDPR) embody a notion of privacy risk that is centered around the fundamental rights of data subjects. This is, however, a fundamentally different notion of privacy risk than the one commonly used in threat modeling, which is largely agnostic of the involved data subjects. This mismatch hampers the applicability of privacy threat modeling approaches such as LINDDUN in a Data Protection by Design (DPbD) context. In this paper, we present a data subject-aware privacy risk assessment model in specific support of privacy threat modeling activities. This model allows the threat modeler to draw upon a more holistic understanding of privacy risk while assessing the relevance of specific privacy threats to the system under design. Additionally, we propose a number of improvements to privacy threat modeling, such as enriching Data Flow Diagram (DFD) system models with appropriate risk inputs (e.g., information on data types and involved data subjects). Incorporation of these risk inputs in DFDs, in combination with a risk estimation approach using Monte Carlo simulations, leads to a more comprehensive assessment of privacy risk. The proposed risk model has been integrated in a threat modeling tool prototype and validated in the context of a realistic eHealth application.
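
A toy Monte Carlo risk estimate in the spirit of the approach above, with invented frequency and impact distributions (the paper derives its risk inputs from the enriched DFDs):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000                                  # simulated years

freq = rng.poisson(lam=2.0, size=N)         # privacy-threat events per year
# Per-event harm to data subjects, lognormal severity (arbitrary units).
annual = np.array([rng.lognormal(1.0, 0.8, size=k).sum() for k in freq])

print("mean annual harm:    ", round(annual.mean(), 2))
print("95th percentile:     ", round(np.percentile(annual, 95), 2))
print("P(harm > 20 units):  ", (annual > 20).mean())
```
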
Carfora, Maria Francesca, Orlando, Albina.  2019.  Quantile based risk measures in cyber security. 2019 International Conference on Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA). :1–4.
Measures and methods used in the financial sector to quantify risk have recently been applied to the cyber world. The aim is to help organizations improve their risk management strategies and wisely plan investments in cyber security. On the other hand, they are useful instruments for insurance companies in pricing cyber insurance contracts and in setting the minimum capital requirements defined by regulators. In this paper we propose an estimation of Value at Risk (VaR), referred to as Cyber Value at Risk in the cyber security domain, and of Tail Value at Risk (TVaR). The data breach information we use is obtained from the “Chronology of data breaches” compiled by the Privacy Rights Clearinghouse.
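
For concreteness, empirical VaR and TVaR reduce to a quantile and a tail mean of the loss sample; the simulated losses below are a stand-in for the Privacy Rights Clearinghouse data:

```python
import numpy as np

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=10, sigma=2, size=50_000)   # simulated breach losses

alpha = 0.95
var = np.quantile(losses, alpha)            # VaR: the alpha-quantile of the loss
tvar = losses[losses >= var].mean()         # TVaR: expected loss beyond the VaR

print(f"VaR_{alpha:.0%}  = {var:,.0f}")
print(f"TVaR_{alpha:.0%} = {tvar:,.0f}")
```
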
Alshawish, Ali, Spielvogel, Korbinian, de Meer, Hermann.  2019.  A Model-Based Time-to-Compromise Estimator to Assess the Security Posture of Vulnerable Networks. 2019 International Conference on Networked Systems (NetSys). :1–3.

Several operational and economic factors impact the patching decisions of critical infrastructures. The constraints imposed by such factors could prevent organizations from fully remedying all of the vulnerabilities that expose their (critical) assets to risk. Therefore, an involved decision maker (e.g. security officer) has to strategically decide on the allocation of possible remediation efforts towards minimizing the inherent security risk. This, however, involves the use of comparative judgments to prioritize risks and remediation actions. Throughout this work, the security risk is quantified using the security metric Time-To-Compromise (TTC). Our main contribution is to provide a generic TTC estimator to comparatively assess the security posture of computer networks taking into account interdependencies between the network components, different adversary skill levels, and characteristics of (known and zero-day) vulnerabilities. The presented estimator relies on a stochastic TTC model and Monte Carlo simulation (MCS) techniques to account for the input data variability and inherent prediction uncertainties.
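
A toy Monte Carlo TTC sketch under assumptions of my own (exponential per-step exploitation times, additive path times, minimum over paths); the paper's stochastic model accounts for much more, e.g. component interdependencies and zero-day vulnerabilities:

```python
import numpy as np

rng = np.random.default_rng(7)

# Attack paths as per-step base exploitation rates (per day); invented numbers.
paths = [[0.10, 0.05], [0.02], [0.08, 0.08, 0.20]]
skill = 1.5                                   # skill > 1 speeds the attacker up

runs = 20_000
ttc = np.empty(runs)
for i in range(runs):
    ttc[i] = min(sum(rng.exponential(1.0 / (rate * skill)) for rate in path)
                 for path in paths)

print(f"median TTC: {np.median(ttc):.1f} days")
print(f"P(compromise within 30 days): {(ttc <= 30).mean():.2%}")
```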

Isaeva, N. A.  2018.  Choice of Control Parameters of Complex System on the Basis of Estimates of the Risks. 2018 Eleventh International Conference "Management of Large-Scale System Development" (MLSD). :1–4.

A method for choosing the control parameters of a complex system based on risk estimates is proposed. A procedure for calculating the risk estimates, intended for choosing rational control actions by allocating a group of operating factors for a given criterion factor, is considered. The purpose of choosing the control parameters of the complex system is to minimize the estimated risk of the system's functioning by solving an extremum-search problem for a function of many variables. An example of choosing the operating factors in the sphere of intangible assets is given.

Zieger, Andrej, Freiling, Felix, Kossakowski, Klaus-Peter.  2018.  The β-Time-to-Compromise Metric for Practical Cyber Security Risk Estimation. 2018 11th International Conference on IT Security Incident Management & IT Forensics (IMF). :115–133.

To manage cybersecurity risks in practice, a simple yet effective method to assess such risks for individual systems is needed. With time-to-compromise (TTC), McQueen et al. (2005) introduced such a metric: it measures the expected time that a system remains uncompromised given a specific threat landscape. Unlike other approaches that require complex system modeling to proceed, TTC combines simplicity with expressiveness and has therefore evolved into one of the most successful cybersecurity metrics in practice. We revisit TTC and identify several mathematical and methodological shortcomings, which we address by embedding all aspects of the metric into the continuous domain and by incorporating information about vulnerability characteristics and other cyber threat intelligence into the model. We propose β-TTC, a formal extension of TTC which includes information from CVSS vectors as well as a continuous attacker skill based on a β-distribution. We show that our new metric (1) remains simple enough for practical use and (2) gives more realistic predictions than the original TTC by using data from a modern and productively used vulnerability database of a national CERT.
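
The following sketch only conveys the headline idea of a β-distributed attacker skill modulating compromise time; the parameters and the skill-to-time mapping are invented, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(3)
skill = rng.beta(a=2.0, b=5.0, size=50_000)   # attacker population skews low-skill
base_ttc_days = 40.0                          # nominal time-to-compromise

# Invented mapping: a skill-s attacker sees mean TTC base*(1-s), with the
# realized TTC drawn exponentially around that mean.
ttc = rng.exponential(base_ttc_days * (1.0 - skill))

print(f"mean TTC over the skill population: {ttc.mean():.1f} days")
print(f"10% of compromises happen within {np.quantile(ttc, 0.10):.1f} days")
```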

Bahirat, Kanchan, Shah, Umang, Cardenas, Alvaro A., Prabhakaran, Balakrishnan.  2018.  ALERT: Adding a Secure Layer in Decision Support for Advanced Driver Assistance System (ADAS). Proceedings of the 26th ACM International Conference on Multimedia. :1984–1992.

With the ever-increasing popularity of LiDAR (Light Detection and Ranging) sensors, a wide range of applications, such as vehicle automation and robot navigation, are being developed utilizing 3D LiDAR data. Many of these applications involve remote guidance of these vehicles and robots, whether for safety or for task performance. Research studies have exposed vulnerabilities of using LiDAR data by considering different security attack scenarios. Considering the security risks associated with the improper behavior of these applications, it has become crucial to authenticate the 3D LiDAR data that highly influence decision making in such applications. In this paper, we propose a framework, ALERT (Authentication, Localization, and Estimation of Risks and Threats), as a secure layer in the decision support system used in the navigation control of vehicles and robots. To start with, ALERT tamper-proofs 3D LiDAR data by employing an innovative mechanism for creating and extracting a dynamic watermark. Next, when tampering is detected (because of the inability to verify the dynamic watermark), ALERT carries out cross-modal authentication to localize the tampered region. Finally, ALERT estimates the level of risk and threat based on the temporal and spatial nature of the attacks on the LiDAR data. This estimation of risk and threats can then be incorporated into the decision support system used by an ADAS (Advanced Driver Assistance System). We carried out several experiments to evaluate the efficacy of the proposed ALERT for ADAS, and the experimental results demonstrate the effectiveness of the proposed approach.

Redmiles, Elissa M., Mazurek, Michelle L., Dickerson, John P..  2018.  Dancing Pigs or Externalities?: Measuring the Rationality of Security Decisions. Proceedings of the 2018 ACM Conference on Economics and Computation. :215–232.

Accurately modeling human decision-making in security is critical to thinking about when, why, and how to recommend that users adopt certain secure behaviors. In this work, we conduct behavioral economics experiments to model the rationality of end-user security decision-making in a realistic online experimental system simulating a bank account. We ask participants to make a financially impactful security choice in the face of transparent risks of account compromise and benefits offered by an optional security behavior (two-factor authentication). We measure the cost and utility of adopting the security behavior via measurements of the time spent executing the behavior and estimates of the participant's wage. We find that more than 50% of our participants made rational (i.e., utility-optimal) decisions, and that participants are more likely to behave rationally in the face of higher risk. Additionally, we find that users' decisions can be modeled well as a function of past behavior (anchoring effects), knowledge of costs, and, to a lesser extent, users' awareness of risks and context (R² = 0.61). We also find evidence of endowment effects, as seen in other areas of the economic and psychological decision-science literature, in our digital-security setting. Finally, using our data, we show theoretically that a "one-size-fits-all" emphasis on security can lead to market losses, but that adoption by a subset of users with higher risks or lower costs can lead to market gains.
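
The rationality test reduces to a cost-benefit comparison; the numbers below are hypothetical, not the study's measurements:

```python
# Adopting a security behavior is utility-optimal when its time cost is
# below the expected loss it prevents.
hourly_wage = 15.0                  # participant's estimated wage ($/h)
seconds_per_login = 20              # extra time 2FA adds per login
logins = 100                        # logins over the study horizon

cost = hourly_wage * (seconds_per_login * logins) / 3600.0

balance = 100.0                     # protected account balance ($)
risk_without = 0.10                 # compromise probability without 2FA
risk_with = 0.01                    # compromise probability with 2FA

benefit = (risk_without - risk_with) * balance

print(f"cost = ${cost:.2f}, expected benefit = ${benefit:.2f}")
print("rational to adopt 2FA" if benefit > cost else "rational to skip 2FA")
```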

Dutta, Raj Gautam, Yu, Feng, Zhang, Teng, Hu, Yaodan, Jin, Yier.  2018.  Security for Safety: A Path Toward Building Trusted Autonomous Vehicles. Proceedings of the International Conference on Computer-Aided Design. :92:1–92:6.

Automotive systems have always been designed with safety in mind. In this regard, the functional safety standard ISO 26262 was drafted with the intention of minimizing risk due to random hardware faults or systematic failures in the design of electrical and electronic components of an automobile. However, the growing complexity of a modern car has added another potential point of failure in the form of cyber or sensor attacks. Recently, researchers have demonstrated that vulnerabilities in a vehicle's software or sensing units could enable attackers to remotely alter the intended operation of the vehicle. As such, in addition to safety, security should be considered an important design goal. However, designing security solutions without considering safety objectives could result in potential hazards. Consequently, in this paper we propose the notion of security for safety and show that by integrating safety conditions with our system-level security solution, which comprises a modified Kalman filter and a Chi-squared detector, we can prevent potential hazards that could occur due to violation of safety objectives during an attack. Furthermore, with the help of a car-following case study, where the follower car is equipped with an adaptive cruise control unit, we show that our proposed system-level security solution preserves the safety constraints and prevents collisions between vehicles while under sensor attack.
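
A textbook residual-based chi-squared detector on a Kalman filter, for orientation; this is the standard construction, not the authors' modified filter or safety integration:

```python
import numpy as np
from scipy.stats import chi2

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity model
H = np.array([[1.0, 0.0]])               # position-only measurement
Q = 1e-3 * np.eye(2)                     # process noise covariance
R = np.array([[0.05]])                   # measurement noise covariance
threshold = chi2.ppf(0.99, df=1)         # 1-D measurement -> 1 degree of freedom

x, P = np.zeros((2, 1)), np.eye(2)
rng = np.random.default_rng(0)
true_pos = 0.0

for k in range(200):
    true_pos += 1.0 * dt                              # vehicle cruising at 1 m/s
    z = true_pos + 0.2 * rng.standard_normal()
    if k >= 150:
        z += 2.0                                      # injected sensor offset attack

    x = F @ x                                         # predict
    P = F @ P @ F.T + Q
    r = np.array([[z]]) - H @ x                       # innovation
    S = H @ P @ H.T + R                               # innovation covariance
    g = float(r.T @ np.linalg.inv(S) @ r)             # chi-squared statistic
    if g > threshold:
        print(f"k={k}: chi-squared alarm (g={g:.1f}), sensor attack flagged")
        break                                         # reject the suspect stream
    K = P @ H.T @ np.linalg.inv(S)                    # update
    x = x + K @ r
    P = (np.eye(2) - K @ H) @ P
```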

Karmaker Santu, Shubhra Kanti, Bindschadler, Vincent, Zhai, ChengXiang, Gunter, Carl A..  2018.  NRF: A Naive Re-Identification Framework. Proceedings of the 2018 Workshop on Privacy in the Electronic Society. :121–132.

The promise of big data relies on the release and aggregation of data sets. When these data sets contain sensitive information about individuals, it has been scalable and convenient to protect the privacy of these individuals by de-identification. However, studies show that the combination of de-identified data sets with other data sets risks re-identification of some records. Some studies have shown how to measure this risk in specific contexts where certain types of public data sets (such as voter rolls) are assumed to be available to attackers. To the extent that it can be accomplished, such analyses enable the threat of compromises to be balanced against the benefits of sharing data. For example, a study that might save lives by enabling medical research may be approved in light of a sufficiently low probability of compromise from sharing de-identified data. In this paper, we introduce a general probabilistic re-identification framework that can be instantiated in specific contexts to estimate the probability of compromise based on explicit assumptions. We further propose a baseline of such assumptions that enables a first-cut estimate of risk for practical case studies. We refer to the framework with these assumptions as the Naive Re-identification Framework (NRF). As a case study, we show how we can apply NRF to analyze and quantify the risk of re-identification arising from releasing de-identified medical data in the context of publicly available social media data. The results of this case study show that NRF can be used to obtain a meaningful quantification of the re-identification risk, compare the risk of different social media platforms, and assess the risks of combinations of various demographic attributes and medical conditions that individuals may voluntarily disclose on social media.
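
As a first-cut illustration of naive re-identification risk: if the attacker knows a record's quasi-identifiers, the record's risk can be taken as one over the size of its equivalence class. The records below are invented:

```python
from collections import Counter

# Invented released records; the attacker is assumed to know the
# quasi-identifiers (zip, age, sex) of the person they are looking for.
records = [
    {"zip": "61820", "age": 34, "sex": "F", "condition": "asthma"},
    {"zip": "61820", "age": 34, "sex": "F", "condition": "flu"},
    {"zip": "61801", "age": 52, "sex": "M", "condition": "diabetes"},
    {"zip": "61801", "age": 29, "sex": "M", "condition": "asthma"},
]
quasi = ("zip", "age", "sex")

classes = Counter(tuple(r[q] for q in quasi) for r in records)
risks = [1.0 / classes[tuple(r[q] for q in quasi)] for r in records]

for r, p in zip(records, risks):
    print(f"{r['condition']:8s} re-identification probability: {p:.2f}")
print(f"expected fraction re-identified: {sum(risks) / len(risks):.2f}")
```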

Davari, Maryam, Bertino, Elisa.  2018.  Reactive Access Control Systems. Proceedings of the 23rd ACM Symposium on Access Control Models and Technologies. :205–207.

In context-aware applications, a user's access privileges depend on both the user's identity and the context. Access control rules are usually statically defined, while contexts and the system state can change dynamically. Changes in context can result in service disruptions. To address this issue, this poster proposes a reactive access control system that associates contingency plans with access control rules. Risk scores are also associated with the actions that are part of the contingency plans. Such risks are estimated using fuzzy inference. Our approach is cast into the XACML reference architecture.

Kontogeorgis, Dimitrios, Limniotis, Konstantinos, Kantzavelou, Ioanna.  2018.  An Evaluation of the HTTPS Adoption in Websites in Greece: Estimating the Users Awareness. Proceedings of the 22nd Pan-Hellenic Conference on Informatics. :46–51.

The adoption of the HTTPS protocol (i.e., HTTP over TLS) by Hellenic websites is studied in this work. Since this protocol constitutes a de facto standard for secure communication on the web, our aim is to identify whether the underlying TLS protocol in popular websites in Greece is properly configured so as to avoid known vulnerabilities. To this end, a systematic approach utilizing two well-known TLS scanner tools is adopted to evaluate 241 sites of high popularity. The results illustrate that only about half of the sites seem to be at a satisfactory level and, thus, there is still much room for improvement, mainly because obsolete ciphers and/or protocol versions are still supported; there is also a small portion of sites (about 3%) that do not implement HTTPS at all, thus posing very high security risks for their users, who provide their credentials via a totally insecure channel. We also examined, using an appropriate online questionnaire, whether users are actually aware of what HTTPS means and how they check the security of websites. The outcome of this research shows that much work needs to be done to increase the knowledge and security awareness of the average Internet user.
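
A single-connection probe of the TLS version and cipher a site negotiates by default can be written in a few lines; the scanners used in the paper go much further, testing all protocol versions and cipher suites:

```python
import socket
import ssl

def probe(host: str, port: int = 443) -> None:
    """Report the TLS version and cipher negotiated with default settings."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print(host, "->", tls.version(), tls.cipher()[0])

probe("example.com")   # e.g. "example.com -> TLSv1.3 TLS_AES_256_GCM_SHA384"
```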

McNeil, Martha, Llansó, Thomas, Pearson, Dallas.  2018.  Application of Capability-Based Cyber Risk Assessment Methodology to a Space System. Proceedings of the 5th Annual Symposium and Bootcamp on Hot Topics in the Science of Security. :7:1–7:10.

Despite more than a decade of heightened focus on cybersecurity, cyber threats remain an ongoing and growing concern [1]-[3]. Stakeholders often perform cyber risk assessments in order to understand potential mission impacts due to cyber threats. One common approach to cyber risk assessment is event-based analysis, which usually considers adverse events, effects, and paths through a system, and then estimates the effort/likelihood and mission impact of such attacks. When conducted manually, this type of approach is labor-intensive and subjective, and it does not scale well to complex systems. As an alternative, we present an automated capability-based risk assessment approach, compare it to manual event-based analysis approaches, describe its application to a notional space system ground segment, and discuss the results.

Allodi, Luca, Massacci, Fabio.  2017.  Attack Potential in Impact and Complexity. Proceedings of the 12th International Conference on Availability, Reliability and Security. :32:1–32:6.

Vulnerability exploitation is reportedly one of the main attack vectors against computer systems. Yet, most vulnerabilities remain unexploited by attackers. It is therefore of central importance to identify vulnerabilities that carry a high 'potential for attack'. In this paper we rely on Symantec data on real attacks detected in the wild to identify a trade-off between the Impact and Complexity of a vulnerability in terms of the attacks it generates; exploiting this effect, we devise a readily computable estimator of a vulnerability's Attack Potential that reliably estimates the expected volume of attacks against the vulnerability. We evaluate our estimator's performance against standard patching policies by measuring foiled attacks and the demanded workload, expressed as the number of vulnerabilities that must be patched. We show that our estimator significantly improves over standard patching policies by ruling out low-risk vulnerabilities while maintaining invariant levels of coverage against attacks in the wild. Our estimator can be used as a first aid for vulnerability prioritisation, focusing assessment efforts on high-potential vulnerabilities.

Pan, Liuxuan, Tomlinson, Allan, Koloydenko, Alexey A..  2017.  Time Pattern Analysis of Malware by Circular Statistics. Proceedings of the Symposium on Architectures for Networking and Communications Systems. :119–130.

Circular statistics present a new technique to analyse the time patterns of events in the field of cyber security. We apply this technique to analyse incidents of malware infections detected by network monitoring. In particular we are interested in the daily and weekly variations of these events. Based on "live" data provided by Spamhaus, we examine the hypothesis that attacks on four countries are distributed uniformly over 24 hours. Specifically, we use the Rayleigh and Watson tests. While our results are mainly exploratory, we are able to demonstrate that the attacks are not uniformly distributed, nor do they follow a Poisson distribution as reported in other research. Our objective in this is to identify a distribution that can be used to establish risk metrics. Moreover, our approach provides a visual overview of the time patterns' variation, indicating when attacks are most likely. This will assist decision makers in cyber security to allocate resources or estimate the cost of system monitoring during high-risk periods. Our results also reveal that the time patterns are influenced by the total number of attacks. Networks subject to a large volume of attacks exhibit bimodality, while one case, where attacks occurred at a relatively lower rate, showed a multi-modal daily variation.
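
The Rayleigh test itself is short: map each event to an angle on the 24-hour circle and test whether the mean resultant length exceeds what uniformity allows. The timestamps below are simulated, not the Spamhaus data:

```python
import numpy as np

def rayleigh_p(seconds_of_day):
    """P-value for the Rayleigh test of circular uniformity over a day."""
    theta = 2 * np.pi * np.asarray(seconds_of_day) / 86400.0
    n = theta.size
    R = np.hypot(np.cos(theta).sum(), np.sin(theta).sum()) / n
    z = n * R**2
    p = np.exp(-z) * (1 + (2 * z - z**2) / (4 * n))   # first-order approximation
    return float(min(max(p, 0.0), 1.0))

rng = np.random.default_rng(5)
clustered = rng.normal(14 * 3600, 2 * 3600, size=500) % 86400  # peak near 14:00
uniform = rng.uniform(0, 86400, size=500)

print("clustered p =", rayleigh_p(clustered))   # ~0 -> uniformity rejected
print("uniform   p =", rayleigh_p(uniform))     # large -> no evidence of a peak
```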

Jonker, Mattijs, King, Alistair, Krupp, Johannes, Rossow, Christian, Sperotto, Anna, Dainotti, Alberto.  2017.  Millions of Targets Under Attack: A Macroscopic Characterization of the DoS Ecosystem. Proceedings of the 2017 Internet Measurement Conference. :100–113.

Denial-of-Service attacks have rapidly increased in terms of frequency and intensity, steadily becoming one of the biggest threats to Internet stability and reliability. However, a rigorous comprehensive characterization of this phenomenon, and of countermeasures to mitigate the associated risks, faces many infrastructure and analytic challenges. We make progress toward this goal by introducing and applying a new framework to enable a macroscopic characterization of attacks, attack targets, and DDoS Protection Services (DPSs). Our analysis leverages data from four independent global Internet measurement infrastructures over the last two years: backscatter traffic to a large network telescope; logs from amplification honeypots; a DNS measurement platform covering 60% of the current namespace; and a DNS-based data set focusing on DPS adoption. Our results reveal the massive scale of the DoS problem, including an eye-opening statistic that one-third of all /24 networks recently estimated to be active on the Internet have suffered at least one DoS attack over the last two years. We also discovered that targets are often simultaneously hit by different types of attacks. In our data, Web servers were the most prominent attack target; an average of 3% of the Web sites in .com, .net, and .org were involved with attacks daily. Finally, we shed light on factors influencing migration to a DPS.

Jain, Bhushan, Tsai, Chia-Che, Porter, Donald E..  2017.  A Clairvoyant Approach to Evaluating Software (In)Security. Proceedings of the 16th Workshop on Hot Topics in Operating Systems. :62–68.

Nearly all modern software has security flaws, either known or unknown to its users. However, metrics for evaluating software security (or the lack thereof) are noisy at best. Common evaluation methods include counting the past vulnerabilities of the program, or comparing the size of the Trusted Computing Base (TCB), measured in lines of code (LoC) or binary size. Other than deleting large swaths of code from a project, it is difficult to assess whether a code change decreased the likelihood of a future security vulnerability. Developers need a practical, constructive way of evaluating security. This position paper argues that we actually have all the tools needed to design a better, empirical method of security evaluation. We discuss related work that estimates the severity and vulnerability of certain attack vectors based on code properties that can be determined via static analysis. This paper proposes a grand, unified model that can predict the risk and severity of vulnerabilities in a program. Our prediction model uses machine learning to correlate code features of open-source applications with the history of vulnerabilities reported in the CVE (Common Vulnerabilities and Exposures) database. Based on this model, one can incorporate into the standard development cycle an analysis that predicts whether the code is becoming more or less prone to vulnerabilities.
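
A sketch of the proposed pipeline with invented features and labels (the actual model would be trained on static-analysis features and real CVE history):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-module static-analysis features.
X = np.column_stack([
    rng.lognormal(6, 1, n),        # lines of code
    rng.gamma(2, 5, n),            # cyclomatic complexity
    rng.poisson(3, n),             # count of risky API calls
])
# Synthetic ground truth: bigger, messier modules are more CVE-prone.
logit = 0.0008 * X[:, 0] + 0.05 * X[:, 1] + 0.2 * X[:, 2] - 3.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
print("predicted risk of a new module:", model.predict_proba([[5000, 25, 6]])[0, 1])
```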

Bullough, Benjamin L, Yanchenko, Anna K, Smith, Christopher L, Zipkin, Joseph R.  2017.  Predicting Exploitation of Disclosed Software Vulnerabilities Using Open-Source Data. Proceedings of the 3rd ACM International Workshop on Security And Privacy Analytics (IWSPA '17).

Each year, thousands of software vulnerabilities are discovered and reported to the public. Unpatched known vulnerabilities are a significant security risk. It is imperative that software vendors quickly provide patches once vulnerabilities are known and that users quickly install those patches as soon as they are available. However, most vulnerabilities are never actually exploited. Since writing, testing, and installing software patches can involve considerable resources, it would be desirable to prioritize the remediation of vulnerabilities that are likely to be exploited. Several published research studies have reported moderate success in applying machine learning techniques to the task of predicting whether a vulnerability will be exploited. These approaches typically use features derived from vulnerability databases (such as the summary text describing the vulnerability) or social media posts that mention the vulnerability by name. However, these prior studies share multiple methodological shortcomings that inflate the predictive power of the approaches. We replicate key portions of the prior work, compare their approaches, and show how the selection of training and test data critically affects the estimated performance of predictive models. The results of this study point to important methodological considerations that should be taken into account so that results reflect real-world utility.
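
The central methodological point can be shown in a few lines: a random train/test split lets the model train on vulnerabilities disclosed after those it is tested on, while a chronological split mirrors deployment. The data here is synthetic, so only the protocol, not the score gap, is meaningful:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 3000
disclosed = np.sort(rng.uniform(0, 3 * 365, n))            # disclosure day per CVE
X = np.column_stack([rng.normal(5, 2, n), rng.random(n)])  # stand-in features
y = rng.random(n) < 0.1                                    # ~10% exploited

# Random split: training data may postdate test data (information leak).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rand = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).score(X_te, y_te)

# Chronological split: train strictly on the past, test on the future.
past = disclosed < np.quantile(disclosed, 0.75)
chrono = (RandomForestClassifier(random_state=0)
          .fit(X[past], y[past]).score(X[~past], y[~past]))

print(f"random split accuracy:        {rand:.3f}")
print(f"chronological split accuracy: {chrono:.3f}")
```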

Boukoros, Spyros, Katzenbeisser, Stefan.  2017.  Measuring Privacy in High Dimensional Microdata Collections. Proceedings of the 12th International Conference on Availability, Reliability and Security. :15:1–15:8.

Microdata is collected by companies in order to enhance the quality of their services as well as the accuracy of their recommendation systems. These data often become publicly available after they have been sanitized. Recent re-identification attacks on publicly available, sanitized datasets illustrate the privacy risks involved in microdata collections. Currently, users have to trust the provider that their data will be safe in case the data is published or a privacy breach occurs. In this work, we empower users by developing a novel, user-centric tool for privacy measurement and a new lightweight privacy metric. The goal of our tool is to estimate a user's privacy level prior to sharing their data with a provider, so that users can consciously decide whether to contribute their data. Our tool estimates an individual's privacy level based on published popularity statistics regarding the items in the provider's database and the user's microdata. In this work, we describe the architecture of our tool as well as a novel privacy metric, which is necessary for our setting, where we do not have access to the provider's database. Our tool is user friendly, relying on smart visual results that raise privacy awareness. We evaluate our tool using three real-world datasets collected from major providers. We demonstrate strong correlations between the average anonymity set per user and the privacy score obtained by our metric. Our results illustrate that our tool, which uses minimal information from the provider, estimates users' privacy levels comparably well, as if it had access to the actual database.
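
A toy popularity-based exposure score conveys the intuition (the paper's metric differs): rare items shrink the anonymity set, so their surprisal contributes more:

```python
import math

# Published popularity statistics: fraction of users holding each item
# (values invented for illustration).
popularity = {"blockbuster": 0.30, "cult_classic": 0.01, "obscure_doc": 0.0005}

def privacy_exposure(user_items):
    """Higher score = smaller expected anonymity set = less privacy."""
    return sum(-math.log2(popularity[i]) for i in user_items)

print(privacy_exposure(["blockbuster"]))                  # ~1.7 bits
print(privacy_exposure(["cult_classic", "obscure_doc"]))  # ~17.6 bits
```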