Biblio

2014-10-24
Hibshi, Hanan, Slavin, Rocky, Niu, Jianwei, Breaux, Travis D.  2014.  Rethinking Security Requirements in RE Research.

As information security has become an increasing concern for software developers and users, requirements engineering (RE) researchers have brought new insight to security requirements. Security requirements aim to address security at the early stages of system design while accommodating the complex needs of different stakeholders. Meanwhile, other research communities, such as usable privacy and security, have also examined these requirements, with the specialized goal of making security more usable for stakeholders ranging from product owners to system users and administrators. In this paper we report results from a literature survey comparing security requirements research from RE conferences with work from the Symposium on Usable Privacy and Security (SOUPS). We report similarities between the two research areas, such as common goals, technical definitions, research problems, and directions. Further, we clarify the differences between these two communities to understand how they can leverage each other’s insights. From our analysis, we recommend new directions in security requirements research, mainly to expand the meaning of security requirements in RE to reflect the technological advancements that the broader field of security is experiencing. These recommendations to encourage cross-collaboration with other communities are not limited to the security requirements area; in fact, we believe they can be generalized to other areas of RE.

Breaux, T.D., Hibshi, H., Rao, A., Lehker, J..  2012.  Towards a framework for pattern experimentation: Understanding empirical validity in requirements engineering patterns. Requirements Patterns (RePa), 2012 IEEE Second International Workshop on. :41-47.

Despite the abundance of information security guidelines, system developers have difficulties implementing technical solutions that are reasonably secure. Security patterns are one possible solution to help developers reuse security knowledge. The challenge is that it takes experts to develop security patterns. To address this challenge, we need a framework to identify and assess patterns and pattern application practices that are accessible to non-experts. In this paper, we narrowly define what we mean by patterns by focusing on requirements patterns and the considerations that may inform how we identify and validate patterns for knowledge reuse. We motivate this discussion using examples from the requirements pattern literature and theory in cognitive psychology.

2014-11-26
Denning, Dorothy E..  1976.  A Lattice Model of Secure Information Flow. Commun. ACM. 19:236–243.

This paper investigates mechanisms that guarantee secure information flow in a computer system. These mechanisms are examined within a mathematical framework suitable for formulating the requirements of secure information flow among security classes. The central component of the model is a lattice structure derived from the security classes and justified by the semantics of information flow. The lattice properties permit concise formulations of the security requirements of different existing systems and facilitate the construction of mechanisms that enforce security. The model provides a unifying view of all systems that restrict information flow, enables a classification of them according to security objectives, and suggests some new approaches. It also leads to the construction of automatic program certification mechanisms for verifying the secure flow of information through a program.
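
For illustration, the flow check at the heart of the lattice model can be sketched in a few lines of Python (the security classes, levels, and category labels below are invented, not taken from the paper): classes are ordered by a can-flow relation, the class-combining operator is the least upper bound, and a flow from x to y is permitted only when the class of x is dominated by the class of y.

    # Minimal sketch of lattice-based flow checking (illustrative labels, not from the paper).
    from dataclasses import dataclass
    from typing import FrozenSet

    @dataclass(frozen=True)
    class SecClass:
        level: int                      # e.g. 0 = Unclassified, 1 = Secret
        categories: FrozenSet[str]      # e.g. {"crypto"}

    def can_flow(a: SecClass, b: SecClass) -> bool:
        """Lattice order: information may flow from class a to class b iff a <= b."""
        return a.level <= b.level and a.categories <= b.categories

    def join(a: SecClass, b: SecClass) -> SecClass:
        """Least upper bound: the class of a value derived from operands in classes a and b."""
        return SecClass(max(a.level, b.level), a.categories | b.categories)

    low = SecClass(0, frozenset())
    high = SecClass(1, frozenset({"crypto"}))

    assert can_flow(low, high)              # flows upward are permitted
    assert not can_flow(high, low)          # flows downward are not
    assert can_flow(join(low, high), high)  # a derived value carries the join of its sources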

This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper.

The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.

Harrison, Michael A., Ruzzo, Walter L., Ullman, Jeffrey D..  1976.  Protection in Operating Systems. Commun. ACM. 19:461–471.

A model of protection mechanisms in computing systems is presented and its appropriateness is argued. The “safety” problem for protection systems under this model is to determine in a given situation whether a subject can acquire a particular right to an object. In restricted cases, it can be shown that this problem is decidable, i.e. there is an algorithm to determine whether a system in a particular configuration is safe. In general, and under surprisingly weak assumptions, it cannot be decided if a situation is safe. Various implications of this fact are discussed.
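
A toy rendering of the access-matrix model behind this safety question may help (the subjects, objects, rights, and single command below are invented for illustration): the protection state is a matrix of rights indexed by subject and object, commands add or remove rights under guards, and safety asks whether any command sequence can ever place a given right in a given cell.

    # Sparse access-matrix protection state in the spirit of the HRU model (toy example).
    from collections import defaultdict

    matrix = defaultdict(set)              # (subject, object) -> set of rights
    matrix[("alice", "file1")] = {"own", "read", "write"}

    def confer_read(granter, receiver, obj):
        """A guarded HRU-style command: an owner may grant 'read' on an object it owns."""
        if "own" in matrix[(granter, obj)]:
            matrix[(receiver, obj)].add("read")

    confer_read("alice", "bob", "file1")
    print(matrix[("bob", "file1")])        # {'read'}

    # The safety question: starting from this configuration, can any sequence of commands
    # ever place 'read' in matrix[("eve", "file1")]?  The paper shows this is decidable
    # only for restricted systems (e.g. mono-operational commands) and undecidable in general.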

This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper.

The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.

2014-12-10
Robling Denning, Dorothy Elizabeth.  1982.  Cryptography and Data Security. :414.

Electronic computers have evolved from exiguous experimental enterprises in the 1940s to prolific practical data processing systems in the 1980s. As we have come to rely on these systems to process and store data, we have also come to wonder about their ability to protect valuable data.

Data security is the science and study of methods of protecting data in computer and communication systems from unauthorized disclosure and modification. The goal of this book is to introduce the mathematical principles of data security and to show how these principles apply to operating systems, database systems, and computer networks. The book is for students and professionals seeking an introduction to these principles. There are many references for those who would like to study specific topics further.

Data security has evolved rapidly since 1975. We have seen exciting developments in cryptography: public-key encryption, digital signatures, the Data Encryption Standard (DES), key safeguarding schemes, and key distribution protocols. We have developed techniques for verifying that programs do not leak confidential data, or transmit classified data to users with lower security clearances. We have found new controls for protecting data in statistical databases--and new methods of attacking these databases. We have come to a better understanding of the theoretical and practical limitations to security.

This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper. The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.

Schneider, Fred B..  2000.  Enforceable Security Policies. ACM Trans. Inf. Syst. Secur.. 3:30–50.

A precise characterization is given for the class of security policies enforceable with mechanisms that work by monitoring system execution, and automata are introduced for specifying exactly that class of security policies. Techniques to enforce security policies specified by such automata are also discussed.
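
The enforceable class coincides with policies recognizable by a security automaton that halts the target on the first violating step. A minimal execution-monitor sketch follows; the "no send after reading a secret" policy and its event names are invented for illustration.

    # Execution monitor in the style of a security automaton (toy policy and events).
    class NoSendAfterRead:
        def __init__(self):
            self.state = "clean"              # automaton states: "clean" -> "tainted"

        def step(self, event: str) -> bool:
            """Advance on one observed action; return False to truncate the execution."""
            if event == "read_secret":
                self.state = "tainted"
                return True
            if event == "send" and self.state == "tainted":
                return False                  # no transition exists: policy violated
            return True

    monitor = NoSendAfterRead()
    for action in ["send", "read_secret", "send"]:
        if not monitor.step(action):
            print("execution truncated before:", action)
            break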

This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper. The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.

Thompson, Ken.  1984.  Reflections on Trusting Trust. Commun. ACM. 27:761–763.

To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software.

This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper. The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.

2015-01-09
Liang Zhang, Dave Choffnes, Tudor Dumitras, Dave Levin, Alan Mislove, Aaron Schulman, Christo Wilson.  2014.  Analysis of SSL Certificate Reissues and Revocations in the Wake of Heartbleed.

Central to the secure operation of a public key infrastructure (PKI) is the ability to revoke certificates. While much of users' security rests on this process taking place quickly, in practice, revocation typically requires a human to decide to reissue a new certificate and revoke the old one. Thus, having a proper understanding of how often systems administrators reissue and revoke certificates is crucial to understanding the integrity of a PKI. Unfortunately, this is typically difficult to measure: while it is relatively easy to determine when a certificate is revoked, it is difficult to determine whether and when an administrator should have revoked.

In this paper, we use a recent widespread security vulnerability as a natural experiment. Publicly announced in April 2014, the Heartbleed OpenSSL bug potentially (and undetectably) revealed servers' private keys. Administrators of servers that were susceptible to Heartbleed should have revoked their certificates and reissued new ones, ideally as soon as the vulnerability was publicly announced.

Using a set of all certificates advertised by the Alexa Top 1 Million domains over a period of six months, we explore the patterns of reissuing and revoking certificates in the wake of Heartbleed. We find that over 73% of vulnerable certificates had yet to be reissued and over 87% had yet to be revoked three weeks after Heartbleed was disclosed. Moreover, our results show a drastic decline in revocations on the weekends, even immediately following the Heartbleed announcement. These results are an important step in understanding the manual processes on which users rely for secure, authenticated communication.
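
The underlying tally reduces to date comparisons against the disclosure date. A toy sketch follows; the certificate records below are illustrative stand-ins, not the study's data set.

    # Illustrative reissue/revocation tally around the Heartbleed disclosure (toy records).
    from datetime import date, timedelta

    DISCLOSURE = date(2014, 4, 7)                 # Heartbleed public announcement
    CUTOFF = DISCLOSURE + timedelta(weeks=3)      # the three-week mark discussed above

    # (domain, reissued_on, revoked_on); None means the event was never observed.
    vulnerable_certs = [
        ("example-a.com", date(2014, 4, 9), date(2014, 4, 10)),
        ("example-b.com", date(2014, 4, 20), None),
        ("example-c.com", None, None),
    ]

    def fraction(pred):
        return sum(1 for rec in vulnerable_certs if pred(rec)) / len(vulnerable_certs)

    not_reissued = fraction(lambda rec: rec[1] is None or rec[1] > CUTOFF)
    not_revoked = fraction(lambda rec: rec[2] is None or rec[2] > CUTOFF)
    print(f"not reissued by the cutoff: {not_reissued:.0%}; not revoked: {not_revoked:.0%}")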

2015-01-11
Heorhiadi, Victor, Fayaz, SeyedKaveh, Reiter, Michael K., Sekar, Vyas.  2014.  SNIPS: A Software-Defined Approach for Scaling Intrusion Prevention Systems via Offloading. 10th International Conference on Information Systems Security, ICISS 2014. 8880

Growing traffic volumes and the increasing complexity of attacks pose a constant scaling challenge for network intrusion prevention systems (NIPS). In this respect, offloading NIPS processing to compute clusters offers an immediately deployable alternative to expensive hardware upgrades. In practice, however, NIPS offloading is challenging on three fronts in contrast to passive network security functions: (1) NIPS offloading can impact other traffic engineering objectives; (2) NIPS offloading impacts user perceived latency; and (3) NIPS actively change traffic volumes by dropping unwanted traffic. To address these challenges, we present the SNIPS system. We design a formal optimization framework that captures tradeoffs across scalability, network load, and latency. We provide a practical implementation using recent advances in software-defined networking without requiring modifications to NIPS hardware. Our evaluations on realistic topologies show that SNIPS can reduce the maximum load by up to 10× while only increasing the latency by 2%.
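
The flavor of the optimization can be conveyed by a toy linear program (a simplified stand-in, not SNIPS's actual formulation; the volumes, latencies, and budget are invented): split traffic between the ingress NIPS and an offload cluster so that the maximum load is minimized subject to a mean added-latency budget.

    # Toy load-vs-latency offloading tradeoff as a linear program (not SNIPS's real model).
    from scipy.optimize import linprog

    v = [80.0, 40.0]     # traffic volumes of two aggregates (hypothetical units)
    d = [5.0, 5.0]       # added latency (ms) per unit of volume offloaded from each aggregate
    L = 2.0              # allowed mean added latency over all traffic (ms)

    # Decision vector [x1, x2, t]: x_i = offloaded fraction of aggregate i, t = maximum load.
    c = [0.0, 0.0, 1.0]                                   # minimize t
    A_ub = [
        [-v[0], -v[1], -1.0],                             # ingress load (1-x)·v stays <= t
        [v[0], v[1], -1.0],                               # cluster load x·v stays <= t
        [d[0] * v[0], d[1] * v[1], 0.0],                  # mean added latency stays within L
    ]
    b_ub = [-(v[0] + v[1]), 0.0, L * (v[0] + v[1])]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1), (0, None)])
    x1, x2, t = res.x
    print(f"offload fractions: {x1:.2f}, {x2:.2f}; resulting max load: {t:.1f}")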

2015-01-13
Soudeh Ghorbani, University of Illinois at Urbana-Champaign, Brighten Godfrey, University of Illinois at Urbana-Champaign.  2014.  Towards Correct Network Virtualization. ACM Workshop on Hot Topics in Software Defined Networks (HotSDN 2014).

In SDN, the underlying infrastructure is usually abstracted for applications that can treat the network as a logical or virtual entity. Commonly, the “mappings” between virtual abstractions and their actual physical implementations are not one-to-one, e.g., a single “big switch” abstract object might be implemented using a distributed set of physical devices. A key question is, what abstractions could be mapped to multiple physical elements while faithfully preserving their native semantics? E.g., can an application developer always expect her abstract “big switch” to act exactly as a physical big switch, despite being implemented using multiple physical switches in reality? We show that the answer to that question is “no” for existing virtual-to-physical mapping techniques: behavior can differ between the virtual “big switch” and the physical network, providing incorrect application-level behavior.

We also show that those incorrect behaviors occur despite the fact that the most pervasive correctness invariants, such as per-packet consistency, are preserved throughout. These examples demonstrate that for practical notions of correctness, new systems and a new analytical framework are needed. We take the first steps by defining end-to-end correctness, a correctness condition that focuses on applications only, and outline a research vision to obtain virtualization systems with correct virtual-to-physical mappings.

Won best paper award at HotSDN 2014.

Dong Jin, Illinois Institute of Technology, Yi Ning, Illinois Institute of Technology.  2014.  Securing Industrial Control Systems with a Simulation-based Verification System. ACM SIGSIM Conference on Principles of Advanced Discrete Simulation.

Today’s quality of life is highly dependent on the successful operation of many large-scale industrial control systems (ICS). To enhance their protection against cyber-attacks and operational errors, we develop a simulation-based verification framework with cross-layer verification techniques that allow comprehensive analysis of the entire ICS-specific stack, including the application, protocol, and network layers.

Work in progress paper.

2015-02-23
Robert Zager, John Zager.  2013.  Combat Identification in Cyberspace.

This article discusses how a system of Identification: Friend or Foe (IFF) can be implemented in email to make users less susceptible to phishing attacks.

2015-03-03
Abbas, W., Koutsoukos, X..  2015.  Efficient Complete Coverage Through Heterogeneous Sensing Nodes. Wireless Communications Letters, IEEE. 4:14-17.

We investigate the coverage efficiency of a sensor network consisting of sensors with circular sensing footprints of different radii. The objective is to completely cover a region in an efficient manner through a controlled (or deterministic) deployment of such sensors. In particular, it is shown that when sensing nodes of two different radii are used for complete coverage, the coverage density is increased, and the sensing cost is significantly reduced as compared to the homogeneous case, in which all nodes have the same sensing radius. Configurations of heterogeneous disks of multiple radii to achieve efficient circle coverings are presented and analyzed.
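
As a back-of-the-envelope illustration of the metrics involved (the definitions, radii, and counts below are assumptions for this sketch, not the paper's configurations), coverage density can be taken as the ratio of the region's area to the total sensing area, so values closer to one indicate less wasted overlap, while total sensing area serves as a proxy for sensing cost.

    # Toy comparison of homogeneous vs. heterogeneous deployments (assumed definitions).
    import math

    REGION_AREA = 100.0   # area of the region to be completely covered (arbitrary units)

    def density_and_cost(disks):
        """disks: (radius, count) pairs assumed to completely cover the region."""
        sensing_area = sum(count * math.pi * radius ** 2 for radius, count in disks)
        return REGION_AREA / sensing_area, sensing_area

    configs = {
        "homogeneous": [(3.0, 5)],                 # five identical large-radius sensors
        "heterogeneous": [(3.0, 3), (1.5, 4)],     # fewer large sensors plus small gap-fillers
    }
    for name, disks in configs.items():
        density, cost = density_and_cost(disks)
        print(f"{name:13s} coverage density = {density:.2f}, sensing cost = {cost:.1f}")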

Smith, Andrew, Vorobeychik, Yevgeniy, Letchford, Joshua.  2014.  Multi-Defender Security Games on Networks. SIGMETRICS Perform. Eval. Rev.. 41:4–7.

Stackelberg security game models and associated computational tools have seen deployment in a number of high-consequence security settings, such as LAX canine patrols and Federal Air Marshal Service. This deployment across essentially independent agencies raises a natural question: what global impact does the resulting strategic interaction among the defenders, each using a similar model, have? We address this question in two ways. First, we demonstrate that the most common solution concept of Strong Stackelberg equilibrium (SSE) can result in significant under-investment in security entirely because SSE presupposes a single defender. Second, we propose a framework based on a different solution concept which incorporates a model of interdependencies among targets, and show that in this framework defenders tend to over-defend, even under significant positive externalities of increased defense.
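
The computation behind an SSE can be seen in a two-target toy game (all payoffs, target names, and the grid search are invented for illustration): the lone defender commits to a coverage distribution, the attacker best-responds with ties broken in the defender's favor, and the defender picks the coverage that maximizes her resulting expected payoff.

    # Brute-force approximation of an SSE in a toy two-target security game (invented payoffs).
    # target -> (defender if covered, defender if uncovered, attacker if covered, attacker if uncovered)
    payoffs = {
        "airport": (0.0, -10.0, -5.0, 10.0),
        "seaport": (0.0, -4.0, -2.0, 4.0),
    }

    def defender_value(c_airport):
        """Defender commits coverage c to 'airport'; the single resource puts 1-c on 'seaport'."""
        cov = {"airport": c_airport, "seaport": 1.0 - c_airport}
        att = lambda t: cov[t] * payoffs[t][2] + (1 - cov[t]) * payoffs[t][3]
        dfd = lambda t: cov[t] * payoffs[t][0] + (1 - cov[t]) * payoffs[t][1]
        # Attacker best-responds; ties are broken in the defender's favor (Strong Stackelberg).
        target = max(payoffs, key=lambda t: (att(t), dfd(t)))
        return dfd(target)

    best = max((i / 1000 for i in range(1001)), key=defender_value)
    print(f"SSE coverage on airport ~ {best:.3f}, defender utility ~ {defender_value(best):.2f}")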

Li, Bo, Vorobeychik, Yevgeniy.  2014.  Feature Cross-Substitution in Adversarial Classification. Advances in Neural Information Processing Systems 27. :2087–2095.

The success of machine learning, particularly in supervised settings, has led to numerous attempts to apply it in adversarial settings such as spam and malware detection. The core challenge in this class of applications is that adversaries are not static data generators, but make a deliberate effort to evade the classifiers deployed to detect them. We investigate both the problem of modeling the objectives of such adversaries and the algorithmic problem of accounting for rational, objective-driven adversaries. In particular, we demonstrate severe shortcomings of feature reduction in adversarial settings using several natural adversarial objective functions, an observation that is particularly pronounced when the adversary is able to substitute across similar features (for example, replace words with synonyms or replace letters in words). We offer a simple heuristic method for making learning more robust to feature cross-substitution attacks. We then present a more general approach based on mixed-integer linear programming with constraint generation, which implicitly trades off overfitting and feature selection in an adversarial setting using a sparse regularizer along with an evasion model. Our approach is the first method for combining an adversarial classification algorithm with a very general class of models of adversarial classifier evasion. We show that our algorithmic approach significantly outperforms state-of-the-art alternatives.
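
The cross-substitution effect fits in a few lines (the vocabulary, weights, synonym map, and threshold below are invented for illustration): the adversary preserves a message's meaning by swapping high-weight words for synonyms the bag-of-words classifier scores lower, pushing the score under the decision threshold.

    # Tiny bag-of-words spam score plus a synonym cross-substitution evasion (toy numbers).
    weights = {"free": 2.0, "gratis": 0.2, "winner": 1.5, "champ": 0.1, "meeting": -1.0}
    synonyms = {"free": "gratis", "winner": "champ"}   # each maps to a lower-weight stand-in
    THRESHOLD = 2.0                                    # scores above this are flagged as spam

    def score(tokens):
        return sum(weights.get(t, 0.0) for t in tokens)

    def cross_substitute(tokens):
        """The adversary swaps flagged words for synonyms the classifier weighs less."""
        return [synonyms.get(t, t) for t in tokens]

    message = ["free", "winner", "meeting"]
    print(score(message) > THRESHOLD)                    # True:  the original message is caught
    print(score(cross_substitute(message)) > THRESHOLD)  # False: the evasive variant slips through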

2015-04-02
Olga Zielinska, Allaire Welk, Christopher B. Mayhorn, Emerson Murphy-Hill.  2015.  Exploring expert and novice mental models of phishing. HotSoS: Symposium and Bootcamp on the Science of Security.

Experience influences actions people take in protecting themselves against phishing. One way to measure experience is through mental models. Mental models are internal representations of a concept or system that develop with experience. By rating pairs of concepts on the strength of their relationship, networks can be created through Pathfinder, showing an in-depth analysis of how information is organized. Researchers had novice and expert computer users rate three sets of terms related to phishing. The terms were divided into three categories: prevention of phishing, trends and characteristics of phishing attacks, and the consequences of phishing. Results indicated that expert mental models were more complex with more links between concepts. Specifically, experts had sixteen, thirteen, and fifteen links in the networks describing the prevention, trends, and consequences of phishing, respectively; however, novices only had eleven, nine, and nine links in the networks describing prevention, trends, and consequences of phishing, respectively. These preliminary results provide quantifiable network displays of mental models of novices and experts that cannot be seen through interviews. This information could provide a basis for future research on how mental models could be used to determine phishing vulnerability and the effectiveness of phishing training.

Yufan Huang, Xiaofan He, Huaiyu Dai.  2015.  Poster: Systematization of Metrics in Intrusion Detection Systems. ACM Proc. of the Symposium and Bootcamp on the Science of Security (HotSoS), University of Illinois at Urbana-Champaign, IL.
Allaire K. Welk, Christopher B. Mayhorn.  2015.  All Signals Go: Investigating How Individual Differences Affect Performance on a Medical Diagnosis Task Designed to Parallel a Signal Intelligence Analyst Task. Symposium and Bootcamp on the Science of Security (HotSoS).

Signals intelligence analysts play a critical role in the United States government by providing information regarding potential national security threats to government leaders. Analysts perform complex decision-making tasks that involve gathering, sorting, and analyzing information. The current study evaluated how individual differences and training influence performance on an Internet search-based medical diagnosis task designed to simulate a signals analyst task. The implemented training emphasized the extraction and organization of relevant information and deductive reasoning. The individual differences of interest included working memory capacity and previous experience with elements of the task, specifically health literacy, prior experience using the Internet, and prior experience conducting Internet searches. Preliminary results indicated that the implemented training did not significantly affect performance; however, working memory significantly predicted performance on the implemented task. These results support previous research and provide additional evidence that working memory capacity influences performance on cognitively complex decision-making tasks, whereas experience with elements of the task may not. These findings suggest that working memory capacity should be considered when screening individuals for signals intelligence positions. Future research should aim to generalize these findings within a broader sample, and ideally utilize a task that directly replicates those performed by signals analysts.

2015-04-04
Munindar P. Singh.  2015.  Norms as a Basis for Governing Sociotechnical Systems: Extended Abstract. Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI). :1–5.

We understand a sociotechnical system as a microsociety in which autonomous parties interact with and about technical objects.  We define governance as the administration of such a system by its participants.  We develop an approach for governance based on a computational representation of norms.  Our approach has the benefit of capturing stakeholder needs precisely while yielding adaptive resource allocation in the face of changes both in stakeholder needs and the environment.  In current work, we are extending this approach to tackle some challenges in cybersecurity.

Extended abstract appearing in the IJCAI Journal Abstracts Track.

Munindar P. Singh.  2015.  Cybersecurity as an Application Domain for Multiagent Systems. Proceedings of the 14th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS).

The science of cybersecurity has recently been garnering much attention among researchers and practitioners dissatisfied with the ad hoc nature of much of the existing work on cybersecurity. Cybersecurity offers a great opportunity for multiagent systems research.  We motivate cybersecurity as an application area for multiagent systems with an emphasis on normative multiagent systems. First, we describe ways in which multiagent systems could help advance our understanding of cybersecurity and provide a set of principles that could serve as a foundation for a new science of cybersecurity. Second, we argue how paying close attention to the challenges of cybersecurity could expose the limitations of current research in multiagent systems, especially with respect to dealing with considerations of autonomy and interdependence.

2015-04-07
Titus Barik, Arpan Chakraborty, Brent Harrison, David L. Roberts, Robert St. Amant.  2013.  Modeling the Concentration Game with ACT-R. The 12th International Conference on Cognitive Modeling.

This paper describes the development of subsymbolic ACT-R models for the Concentration game. Performance data is taken from an experiment in which participants played the game under two conditions: minimizing the number of mismatches/turns during a game, and minimizing the time to complete a game. Conflict resolution and parameter tuning are used to implement an accuracy model and a speed model that capture the differences for the two conditions. Visual attention drives exploration of the game board in the models. Modeling results are generally consistent with human performance, though some systematic differences can be seen. Modeling decisions, model limitations, and open issues are discussed.

Ignacio X. Dominguez, Alok Goel, David L. Roberts, Robert St. Amant.  2015.  Detecting Abnormal User Behavior Through Pattern-mining Input Device Analytics. Proceedings of the 2015 Symposium and Bootcamp on the Science of Security (HotSoS-15).
Robert St. Amant, Prairie Rose Goodwin, Ignacio Dominguez, David L. Roberts.  2015.  Toward Expert Typing in ACT-R. Proceedings of the 2015 International Conference on Cognitive Modeling (ICCM 15).