Biblio

Filters: Keyword is ACM CCS
2014-09-17
Hwang, JeeHyun, Williams, Laurie, Vouk, Mladen.  2014.  Access Control Policy Evolution: An Empirical Study. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :28:1–28:2.

Access Control Policies (ACPs) evolve. Understanding the trends and evolution patterns of ACPs could provide guidance about the reliability and maintenance of ACPs. Our research goal is to help policy authors improve the quality of ACP evolution based on an understanding of trends and evolution patterns in ACPs. We performed an empirical study by analyzing the ACP changes over time for two systems: Security Enhanced Linux (SELinux), and an open-source virtual computing platform (VCL). We measured trends in terms of the number of policy lines and lines of code (LOC), respectively, and we observed evolution patterns. For example, an evolution pattern st1 → st2 says that st1 (e.g., "read") evolves into st2 (e.g., "read" and "write"); this pattern indicates that policy authors added "write" permission in addition to the existing "read" permission. We found that some evolution patterns appear to occur more frequently than others.
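
As a toy illustration of the evolution-pattern idea described above (not code from the study), the following Python sketch counts how permission sets change between two hypothetical policy snapshots; the rule and permission names are invented.

```python
# Minimal sketch: detect permission "evolution patterns" such as
# {"read"} -> {"read", "write"} between two snapshots of an access control
# policy. Policies are toy dicts mapping (subject, object) pairs to
# permission sets; only the pattern-counting idea mirrors the abstract.

from collections import Counter

def evolution_patterns(old_policy, new_policy):
    """Count how each permission set evolved between two policy versions."""
    patterns = Counter()
    for rule, old_perms in old_policy.items():
        new_perms = new_policy.get(rule)
        if new_perms is not None and new_perms != old_perms:
            patterns[(frozenset(old_perms), frozenset(new_perms))] += 1
    return patterns

old = {("user_t", "etc_t"): {"read"}, ("httpd_t", "log_t"): {"append"}}
new = {("user_t", "etc_t"): {"read", "write"}, ("httpd_t", "log_t"): {"append"}}

for (st1, st2), count in evolution_patterns(old, new).items():
    print(f"{set(st1)} -> {set(st2)}: {count} rule(s)")
```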

Ray, Arnab, Cleaveland, Rance.  2014.  An Analysis Method for Medical Device Security. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :16:1–16:2.

This paper is a proposal for a poster. In it we describe a medical device security approach that researchers at Fraunhofer used to analyze different kinds of medical devices for security vulnerabilities. These medical devices were provided to Fraunhofer by a medical device manufacturer whose name we cannot disclose due to non-disclosure agreements.

Subramani, Shweta, Vouk, Mladen, Williams, Laurie.  2014.  An Analysis of Fedora Security Profile. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :35:1–35:2.

This paper examines security faults/vulnerabilities reported for Fedora. Results indicate that, at least in some situations, a roughly constant fault rate may be used to guide estimation of residual vulnerabilities in an already released product, as well as possibly guide testing of the next version of the product.

Das, Anupam, Borisov, Nikita, Caesar, Matthew.  2014.  Analyzing an Adaptive Reputation Metric for Anonymity Systems. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :11:1–11:11.

Low-latency anonymity systems such as Tor rely on intermediate relays to forward user traffic; these relays, however, are often unreliable, resulting in a degraded user experience. Worse yet, malicious relays may introduce deliberate failures in a strategic manner in order to increase their chance of compromising anonymity. In this paper we propose using a reputation metric that can profile the reliability of relays in an anonymity system based on users' past experience. The two main challenges in building a reputation-based system for an anonymity system are: first, malicious participants can strategically oscillate between good and malicious nature to evade detection, and second, an observed failure in an anonymous communication cannot be uniquely attributed to a single relay. Our proposed framework addresses the former challenge by using a proportional-integral-derivative (PID) controller-based reputation metric that ensures malicious relays adopting time-varying strategic behavior obtain low reputation scores over time, and the latter by introducing a filtering scheme based on the evaluated reputation score to effectively discard relays mounting attacks. We collect data from the live Tor network and perform simulations to validate the proposed reputation-based filtering scheme. We show that an attacker does not gain any significant benefit by performing deliberate failures in the presence of the proposed reputation framework.
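
The sketch below is an illustrative, hedged rendering of the PID-controller reputation idea summarized in this abstract; the gains, target success rate, and update rule are assumptions chosen for demonstration, not the paper's calibrated design.

```python
# Toy PID-based reputation: a relay's score is driven by the error between a
# target circuit success rate and the observed rate. The integral term
# accumulates past failures, so a relay that oscillates between good and bad
# behavior keeps a low score. All parameters are illustrative assumptions.

class PIDReputation:
    def __init__(self, kp=1.0, ki=0.3, kd=0.5, target=0.95):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target        # desired circuit success rate
        self.integral = 0.0         # memory of past failures
        self.prev_error = 0.0
        self.score = 1.0            # start with full reputation

    def update(self, observed_success_rate):
        error = self.target - observed_success_rate
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        # More failures (larger error) push the reputation score down.
        self.score -= self.kp * error + self.ki * self.integral + self.kd * derivative
        self.score = max(0.0, min(1.0, self.score))
        return self.score

relay = PIDReputation()
for rate in [0.95, 0.40, 0.95, 0.40, 0.95]:   # oscillating (strategic) behavior
    print(round(relay.update(rate), 3))
```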

Schmerl, Bradley, Cámara, Javier, Gennari, Jeffrey, Garlan, David, Casanova, Paulo, Moreno, Gabriel A., Glazier, Thomas J., Barnes, Jeffrey M..  2014.  Architecture-based Self-protection: Composing and Reasoning About Denial-of-service Mitigations. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :2:1–2:12.

Security features are often hardwired into software applications, making it difficult to adapt security responses to reflect changes in runtime context and new attacks. In prior work, we proposed the idea of architecture-based self-protection as a way of separating adaptation logic from application logic and providing a global perspective for reasoning about security adaptations in the context of other business goals. In this paper, we present an approach, based on this idea, for combating denial-of-service (DoS) attacks. Our approach allows DoS-related tactics to be composed into more sophisticated mitigation strategies that encapsulate possible responses to a security problem. Then, utility-based reasoning can be used to consider different business contexts and qualities. We describe how this approach forms the underpinnings of a scientific approach to self-protection, allowing us to reason about how to make the best choice of mitigation at runtime. Moreover, we also show how formal analysis can be used to determine whether the mitigations cover the range of conditions the system is likely to encounter, and the effect of mitigations on other quality attributes of the system. We evaluate the approach using the Rainbow self-adaptive framework and show how Rainbow chooses DoS mitigation tactics that are sensitive to different business contexts.

Forget, Alain, Komanduri, Saranga, Acquisti, Alessandro, Christin, Nicolas, Cranor, Lorrie Faith, Telang, Rahul.  2014.  Building the Security Behavior Observatory: An Infrastructure for Long-term Monitoring of Client Machines. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :24:1–24:2.

We present an architecture for the Security Behavior Observatory (SBO), a client-server infrastructure designed to collect a wide array of data on user and computer behavior from hundreds of participants over several years. The SBO infrastructure had to be carefully designed to fulfill several requirements. First, the SBO must scale with the desired length, breadth, and depth of data collection. Second, we must take extraordinary care to ensure the security of the collected data, which will inevitably include intimate participant behavioral data. Third, the SBO must serve our research interests, which will inevitably change as collected data is analyzed and interpreted. This short paper summarizes some of the benefits of our design and implementation and discusses a few hurdles and trade-offs to consider when designing such a data collection system.

He, Xiaofan, Dai, Huaiyu, Shen, Wenbo, Ning, Peng.  2014.  Channel Correlation Modeling for Link Signature Security Assessment. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :25:1–25:2.

It is widely accepted that wireless channels decorrelate fast over space, and half a wavelength is the key distance metric used in link signature (LS) for security assurance. However, we believe that this channel correlation model is questionable, and will lead to a false sense of security. In this project, we focus on establishing correct modeling of channel correlation so as to facilitate proper guard zone designs for LS security in various wireless environments of interest.

Han, Yujuan, Lu, Wenlian, Xu, Shouhuai.  2014.  Characterizing the Power of Moving Target Defense via Cyber Epidemic Dynamics. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :10:1–10:12.

Moving Target Defense (MTD) can enhance the resilience of cyber systems against attacks. Although there have been many MTD techniques, there is no systematic understanding and quantitative characterization of the power of MTD. In this paper, we propose to use a cyber epidemic dynamics approach to characterize the power of MTD. We define and investigate two complementary measures that are applicable when the defender aims to deploy MTD to achieve a certain security goal. One measure emphasizes the maximum portion of time during which the system can afford to stay in an undesired configuration (or posture), without considering the cost of deploying MTD. The other measure emphasizes the minimum cost of deploying MTD, while accommodating that the system has to stay in an undesired configuration (or posture) for a given portion of time. Our analytic studies lead to algorithms for optimally deploying MTD.
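
As a rough illustration of the trade-off this abstract describes, the following toy simulation (an assumption-laden sketch, not the paper's epidemic model) varies the portion of time pi spent in an undesired configuration and checks whether a long-run infection goal is still met.

```python
# Toy cyber epidemic dynamics: the defender alternates between a desired
# configuration (low infection rate) and an undesired one (high rate).
# We ask how large the undesired-time portion pi can be while the
# time-averaged infected fraction stays below a goal. All rates are invented.

def simulate(pi, steps=10_000, beta_good=0.05, beta_bad=0.4, gamma=0.2):
    """Return the time-averaged infected fraction for undesired-time portion pi."""
    infected, total = 0.1, 0.0
    for t in range(steps):
        beta = beta_bad if (t % 100) < pi * 100 else beta_good  # MTD schedule
        infected += beta * infected * (1 - infected) - gamma * infected
        infected = min(max(infected, 0.0), 1.0)
        if t >= steps // 2:          # average after transients die out
            total += infected
    return total / (steps - steps // 2)

goal = 0.05
for pi in (0.2, 0.4, 0.6, 0.8):
    level = simulate(pi)
    print(f"pi={pi:.1f} -> avg infected fraction {level:.3f} "
          f"({'meets' if level <= goal else 'violates'} goal)")
```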

Xu, Shouhuai.  2014.  Cybersecurity Dynamics. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :14:1–14:2.

We explore the emerging field of Cybersecurity Dynamics, a candidate foundation for the Science of Cybersecurity.

Venkatakrishnan, Roopak, Vouk, Mladen A..  2014.  Diversity-based Detection of Security Anomalies. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :29:1–29:2.

Detecting and preventing attacks before they compromise a system can be done using acceptance testing, redundancy-based mechanisms, and external consistency checking such as external monitoring and watchdog processes. Diversity-based adjudication is a step towards an oracle that uses knowable behavior of a healthy system. That approach, under the best circumstances, is able to detect even zero-day attacks. In this approach we use functionally equivalent but in some way diverse components, and we compare their output vectors and reactions for a given input vector. This paper discusses the practical relevance of this approach in the context of recent web-service attacks.
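
A minimal sketch of diversity-based adjudication under the assumptions above: two functionally equivalent but differently implemented components process the same input, and any disagreement between their outputs is flagged. The parsers used here are illustrative stand-ins, not components from the paper.

```python
# Diversity-based adjudication sketch: run the same input through diverse,
# functionally equivalent components and accept the result only on consensus.

import ast
import json

def parser_a(raw):
    """One implementation path: standard JSON parsing."""
    return json.loads(raw)

def parser_b(raw):
    """A deliberately different implementation path: Python literal parsing."""
    return ast.literal_eval(raw)

def adjudicate(raw):
    outputs = [parser_a(raw), parser_b(raw)]
    if all(o == outputs[0] for o in outputs):
        return outputs[0]                     # consensus: accept the output
    raise RuntimeError("adjudication failed: outputs disagree, possible attack")

print(adjudicate('{"user": "alice", "role": "admin"}'))
```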

Xu, Shouhuai.  2014.  Emergent Behavior in Cybersecurity. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :13:1–13:2.

We argue that emergent behavior is inherent to cybersecurity.

Huang, Jingwei, Nicol, David M..  2014.  Evidence-based Trust Reasoning. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :17:1–17:2.

Trust is a necessary component in cybersecurity. It is a common task for a system to make a decision about whether or not to trust the credential of an entity from another domain, issued by a third party. Generally, in cyberspace, connected and interacting systems largely rely on each other with respect to security, privacy, and performance. In their interactions, one entity or system needs to trust others, and this "trust" frequently becomes a vulnerability of that system. Aiming to mitigate this vulnerability, we are developing a computational theory of trust as part of our efforts towards the Science of Security. Previously, we developed a formal-semantics-based calculus of trust [3, 2], in which trust can be calculated based on a trustor's direct observation of the performance of the trustee, or based on a trust network. In this paper, we construct a framework for trust reasoning based on observed evidence. We take privacy in cloud computing as a driving application case [5].

Layman, Lucas, Diffo, Sylvain David, Zazworka, Nico.  2014.  Human Factors in Webserver Log File Analysis: A Controlled Experiment on Investigating Malicious Activity. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :9:1–9:11.

While automated methods are the first line of defense for detecting attacks on webservers, a human agent is required to understand the attacker's intent and the attack process. The goal of this research is to understand the value of various log fields and the cognitive processes by which log information is grouped, searched, and correlated. Such knowledge will enable the development of human-focused log file investigation technologies. We performed controlled experiments with 65 subjects (IT professionals and novices) who investigated excerpts from six webserver log files. Quantitative and qualitative data were gathered to: 1) analyze subject accuracy in identifying malicious activity; 2) identify the most useful pieces of log file information; and 3) understand the techniques and strategies used by subjects to process the information. Statistically significant effects were observed in the accuracy of identifying attacks and in the time taken, depending on the type of attack. Systematic differences were also observed in the log fields used by high-performing and low-performing groups. The findings include: 1) new insights into how specific log data fields are used to effectively assess potentially malicious activity; 2) obfuscating factors in log data from a human cognitive perspective; and 3) practical implications for tools to support log file investigations.

Yang, Wei, Xiao, Xusheng, Pandita, Rahul, Enck, William, Xie, Tao.  2014.  Improving Mobile Application Security via Bridging User Expectations and Application Behaviors. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :32:1–32:2.

To keep malware out of mobile application markets, existing techniques analyze the security aspects of application behaviors and summarize patterns of these security aspects to determine what applications do. However, user expectations (reflected via user perception in combination with user judgment) are often not incorporated into such analysis to determine whether application behaviors are within user expectations. This poster presents our recent work on bridging the semantic gap between user perceptions of the application behaviors and the actual application behaviors.

Maass, Michael, Scherlis, William L., Aldrich, Jonathan.  2014.  In-nimbo Sandboxing. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :1:1–1:12.

Sandboxes impose a security policy, isolating applications and their components from the rest of a system. While many sandboxing techniques exist, state of the art sandboxes generally perform their functions within the system that is being defended. As a result, when the sandbox fails or is bypassed, the security of the surrounding system can no longer be assured. We experiment with the idea of in-nimbo sandboxing, encapsulating untrusted computations away from the system we are trying to protect. The idea is to delegate computations that may be vulnerable or malicious to virtual machine instances in a cloud computing environment. This may not reduce the possibility of an in-situ sandbox compromise, but it could significantly reduce the consequences should that possibility be realized. To achieve this advantage, there are additional requirements, including: (1) A regulated channel between the local and cloud environments that supports interaction with the encapsulated application, (2) Performance design that acceptably minimizes latencies in excess of the in-situ baseline. To test the feasibility of the idea, we built an in-nimbo sandbox for Adobe Reader, an application that historically has been subject to significant attacks. We undertook a prototype deployment with PDF users in a large aerospace firm. In addition to thwarting several examples of existing PDF-based malware, we found that the added increment of latency, perhaps surprisingly, does not overly impair the user experience with respect to performance or usability.

Davis, Agnes, Shashidharan, Ashwin, Liu, Qian, Enck, William, McLaughlin, Anne, Watson, Benjamin.  2014.  Insecure Behaviors on Mobile Devices Under Stress. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :31:1–31:2.

One of the biggest challenges in mobile security is human behavior. The most secure password may be useless if it is sent as a text or in an email. The most secure network is only as secure as its most careless user. Thus, in the current project we sought to discover the conditions under which users of mobile devices were most likely to make security errors. This scaffolds a larger project where we will develop automatic ways of detecting such environments and eventually supporting users during these times to encourage safe mobile behaviors.

Layman, Lucas, Zazworka, Nico.  2014.  InViz: Instant Visualization of Security Attacks. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :15:1–15:2.

The InViz tool is a functional prototype that provides graphical visualizations of log file events to support real-time attack investigation. Through visualization, both experts and novices in cybersecurity can analyze patterns of application behavior and investigate potential cybersecurity attacks. The goal of this research is to identify and evaluate which cybersecurity information to visualize in order to reduce the amount of time required to perform cyber forensics.

Rao, Ashwini, Hibshi, Hanan, Breaux, Travis, Lehker, Jean-Michel, Niu, Jianwei.  2014.  Less is More?: Investigating the Role of Examples in Security Studies Using Analogical Transfer. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :7:1–7:12.

Information system developers and administrators often overlook critical security requirements and best practices. This may be due to lack of tools and techniques that allow practitioners to tailor security knowledge to their particular context. In order to explore the impact of new security methods, we must improve our ability to study the impact of security tools and methods on software and system development. In this paper, we present early findings of an experiment to assess the extent to which the number and type of examples used in security training stimuli can impact security problem solving. To motivate this research, we formulate hypotheses from analogical transfer theory in psychology. The independent variables include number of problem surfaces and schemas, and the dependent variable is the answer accuracy. Our study results do not show a statistically significant difference in performance when the number and types of examples are varied. We discuss the limitations, threats to validity and opportunities for future studies in this area.

Kästner, Christian, Pfeffer, Jürgen.  2014.  Limiting Recertification in Highly Configurable Systems: Analyzing Interactions and Isolation Among Configuration Options. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :23:1–23:2.

In highly configurable systems the configuration space is too big for (re-)certifying every configuration in isolation. In this project, we combine software analysis with network analysis to detect which configuration options interact and which have local effects. Instead of analyzing a system such as Linux and SELinux for every combination of configuration settings one by one (>10^2000 even considering compile-time configurations only), we analyze the effect of each configuration option once for the entire configuration space. The analysis will guide us to designs separating interacting configuration options in a core system and isolating orthogonal and less trusted configuration options from this core.

King, Jason, Williams, Laurie.  2014.  Log Your CRUD: Design Principles for Software Logging Mechanisms. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :5:1–5:10.

According to a 2011 survey in healthcare, the most commonly reported breaches of protected health information involved employees snooping into medical records of friends and relatives. Logging mechanisms can provide a means for forensic analysis of user activity in software systems by proving that a user performed certain actions in the system. However, logging mechanisms often inconsistently capture user interactions with sensitive data, creating gaps in traces of user activity. Explicit design principles and systematic testing of logging mechanisms within the software development lifecycle may help strengthen the overall security of software. The objective of this research is to observe the current state of logging mechanisms by performing an exploratory case study in which we systematically evaluate logging mechanisms by supplementing the expected results of existing functional black-box test cases to include log output. We perform an exploratory case study of four open-source electronic health record (EHR) logging mechanisms: OpenEMR, OSCAR, Tolven eCHR, and WorldVistA. We supplement the expected results of 30 United States government-sanctioned test cases to include log output to track access of sensitive data. We then execute the test cases on each EHR system. Six of the 30 (20%) test cases failed on all four EHR systems because user interactions with sensitive data are not logged. We find that viewing protected data is often not logged by default, allowing unauthorized views of data to go undetected. Based on our results, we propose a set of principles that developers should consider when developing logging mechanisms to ensure the ability to capture adequate traces of user activity.
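
To illustrate the idea of supplementing expected test results with log output, the following sketch uses a hypothetical, minimal stand-in system (not one of the four EHRs studied): the test passes only if viewing sensitive data leaves an audit trace.

```python
# Sketch: extend a functional black-box test so its expected results include
# log output, i.e., reading a sensitive record must produce an audit entry.

import unittest

class TinyEHR:
    """Toy stand-in for an EHR: stores records and logs read access."""
    def __init__(self):
        self.records = {"patient-42": "diagnosis: ..."}
        self.audit_log = []

    def view_record(self, user, record_id):
        entry = self.records[record_id]
        self.audit_log.append(("READ", user, record_id))  # behavior under test
        return entry

class TestViewIsLogged(unittest.TestCase):
    def test_view_creates_audit_entry(self):
        ehr = TinyEHR()
        ehr.view_record("nurse_a", "patient-42")
        # Supplemented expectation: not just the returned data, but the trace.
        self.assertIn(("READ", "nurse_a", "patient-42"), ehr.audit_log)

if __name__ == "__main__":
    unittest.main()
```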

Liu, Qian, Bae, Juhee, Watson, Benjamin, McLaughhlin, Anne, Enck, William.  2014.  Modeling and Sensing Risky User Behavior on Mobile Devices. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :33:1–33:2.

As mobile technology begins to dominate computing, understanding how its use impacts security becomes increasingly important. Fortunately, this challenge is also an opportunity: the rich set of sensors with which most mobile devices are equipped provides a rich contextual dataset, one that should enable mobile user behavior to be modeled well enough to predict when users are likely to act insecurely, and to provide cognitively grounded explanations of those behaviors. We will evaluate this hypothesis with a series of experiments designed first to confirm that mobile sensor data can reliably predict user stress, and that users experiencing such stress are more likely to act insecurely.

Da, Gaofeng, Xu, Maochao, Xu, Shouhuai.  2014.  A New Approach to Modeling and Analyzing Security of Networked Systems. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :6:1–6:12.

Modeling and analyzing security of networked systems is an important problem in the emerging Science of Security and has been under active investigation. In this paper, we propose a new approach towards tackling the problem. Our approach is inspired by the shock model and random environment techniques in the Theory of Reliability, while accommodating security ingredients. To the best of our knowledge, our model is the first that can accommodate a certain degree of adaptiveness of attacks, which substantially weakens the often-made independence and exponential attack inter-arrival time assumptions. The approach leads to a stochastic process model with two security metrics, and we attain some analytic results in terms of the security metrics.

Feigenbaum, Joan, Jaggard, Aaron D., Wright, Rebecca N..  2014.  Open vs. Closed Systems for Accountability. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :4:1–4:11.

The relationship between accountability and identity in online life presents many interesting questions. Here, we first systematically survey the various (directed) relationships among principals, system identities (nyms) used by principals, and actions carried out by principals using those nyms. We also map these relationships to corresponding accountability-related properties from the literature. Because punishment is fundamental to accountability, we then focus on the relationship between punishment and the strength of the connection between principals and nyms. To study this particular relationship, we formulate a utility-theoretic framework that distinguishes between principals and the identities they may use to commit violations. In doing so, we argue that the analogue applicable to our setting of the well known concept of quasilinear utility is insufficiently rich to capture important properties such as reputation. We propose more general utilities with linear transfer that do seem suitable for this model. In our use of this framework, we define notions of "open" and "closed" systems. This distinction captures the degree to which system participants are required to be bound to their system identities as a condition of participating in the system. This allows us to study the relationship between the strength of identity binding and the accountability properties of a system.

Cao, Phuong, Li, Hongyang, Nahrstedt, Klara, Kalbarczyk, Zbigniew, Iyer, Ravishankar, Slagell, Adam J..  2014.  Personalized Password Guessing: A New Security Threat. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :22:1–22:2.

This paper presents a model for generating personalized passwords (i.e., passwords based on user and service profiles). A user's password is generated from a list of personalized words, each drawn from a topic relating to the user and the service in use. The proposed model can be applied to: (i) assess the strength of a password (i.e., determine how many guesses are needed to crack the password), and (ii) generate secure (i.e., containing digits, special characters, or capitalized characters) yet easy-to-memorize passwords.
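
A hedged sketch of the personalized-guessing idea: candidates are assembled from words drawn from user- and service-related topics, and a password's strength is approximated by its rank in that enumeration. The profile data and topic structure here are invented for illustration, not taken from the paper.

```python
# Toy personalized password model: enumerate candidates by combining one word
# per user/service topic; a password's "strength" is its rank in that list.

from itertools import product

def candidates(topics):
    """Enumerate personalized candidates, one word drawn from each topic."""
    for combo in product(*topics.values()):
        yield "".join(combo)

def guesses_to_crack(password, topics):
    """Return how many guesses this model needs, or None if outside the space."""
    for rank, cand in enumerate(candidates(topics), start=1):
        if cand == password:
            return rank
    return None

profile = {
    "pet":     ["rex", "milo"],
    "year":    ["1990", "90"],
    "service": ["mail", "bank"],
}

print(guesses_to_crack("milo90bank", profile))   # small rank -> weak password
print(guesses_to_crack("xK#72!qpv", profile))    # None -> outside this model
```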

Tembe, Rucha, Zielinska, Olga, Liu, Yuqi, Hong, Kyung Wha, Murphy-Hill, Emerson, Mayhorn, Chris, Ge, Xi.  2014.  Phishing in International Waters: Exploring Cross-national Differences in Phishing Conceptualizations Between Chinese, Indian and American Samples. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :8:1–8:7.

One hundred sixty-four participants from the United States, India, and China completed a survey designed to assess past phishing experiences and whether they engaged in certain online safety practices (e.g., reading a privacy policy). The study investigated participants' reported agreement regarding the characteristics of phishing attacks, the types of media where phishing occurs, and the consequences of phishing. A multivariate analysis of covariance indicated that there were significant differences in agreement regarding phishing characteristics, phishing consequences, and types of media where phishing occurs across these three nationalities. Chronological age and education did not influence the agreement ratings; therefore, the samples were demographically equivalent with regard to these variables. A logistic regression analysis was conducted to analyze the categorical variables and nationality data. Results based on self-report data indicated that (1) Indians were more likely to be phished than Americans, (2) Americans took protective actions more frequently than Indians by destroying old documents, and (3) Americans were more likely to notice the "padlock" security icon than either Indian or Chinese respondents. The potential implications of these results are discussed in terms of designing culturally sensitive anti-phishing solutions.