Review of HotSoS 2015

Review of the 2015 Symposium and Bootcamp on the Science of Security

Overview & Plenary Presentations | Research Paper Sessions | Tutorials

OVERVIEW -- Interest in cybersecurity science and research heats up at HotSoS 2015

The 2015 Symposium and Bootcamp on the Science of Security (HotSoS) was held April 21-22 at the National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign. This third annual conference brought together researchers from numerous disciplines seeking a methodical, rigorous, scientific approach to identifying and removing cyber threats. As part of the Science of Security effort, HotSoS aims to understand how computing systems are designed, built, used, and maintained in light of their security issues and challenges. It seeks not only to bring scientific rigor to research, but also to identify the scientific value and underpinnings of cybersecurity.

David Nicol, Director of the Illinois Trust Institute and co-PI for the Illinois Science of Security Lablet, was conference chair and the affable host of HotSoS 2015. Introducing the event, he called for participants to interact and share ideas, thoughts, and questions about the nature of security and the nascent science that is emerging. Kathy Bogner, Intelligence Community Coordinator for Cybersecurity Research, represented the NSA sponsor and welcomed the group, noting the government's long-term interest in and commitment to their work. She challenged them to continue to address cybersecurity using strong scientific principles and methods and to share the fruits of that work. She cited the number of universities and individual collaborators engaged in Science of Security research as an indication of activity and growth in the field.

Mike Reiter, Lawrence M. Slifkin Distinguished Professor of Computer Science, University of North Carolina, delivered the keynote, "Is It Science or Engineering? A Sampling of Recent Research." He said interest in a "Science of Security" is confusing to many researchers, in part due to a lack of clarity about what this "science" should be like and how it should differ from principled engineering. To help clarify the distinction, he described recent research projects on large-scale measurement, attack development, human-centric design, network defense, and provable cryptography, assessing which ones, if any, constitute "science." A lively debate ensued. Pictured at right, Mike Reiter smiles at an audience member's remark.

Jonathan Spring, Researcher and Analyst for the CERT Division, Software Engineering Institute, Carnegie Mellon University, spoke on "Avoiding Pseudoscience in the Science of Security." In his view, we seek philosophical underpinnings for the science of security in an effort to avoid pseudoscience. Drawing on the philosophy of science, he described how "observation and reasoning from results" differ between computing and other sciences because of the engineered artifacts under study. He demonstrated the challenges of avoiding pseudoscience, and some solutions, with a case study of malware analysis.

Patrick McDaniel, Professor of Computer Science and Director of the Systems and Internet Infrastructure Security Laboratory, Penn State University, addressed "The Importance of Measurement and Decision Making to a Science of Security." He argued that a "science" is based on reasoned modification of a system or environment in response to a functional, performance, or security need. His talk highlighted activities surrounding the Cyber-Security Collaborative Research Alliance, five universities working in collaboration with the Army Research Lab. Another lively debate ensued. The picture at left captures Prof. McDaniel asking, "Why don't we wear amulets to protect against car accidents?" in addressing measurement.

Tutorials and a workshop were conducted concurrently with the paper presentations. Five tutorials covered social network analysis; human behavior; policy-governed secure collaboration; security-metrics-driven evaluation, design, development, and deployment; and resilient architectures. The workshop focused on analyzing papers from the security literature to determine how completely authors describe their research methods. Pictured here is Dusko Pavlovic, U. of Hawai'i, who was both animated and stimulating.


Thirteen researchers from the United Kingdom and the United States presented individual papers on studies about signals intelligence analyst tasks, detecting abnormal user behavior, tracing cyber-attack analysis processes, vulnerability prediction models, preemptive intrusion detection, enabling forensics, global malware encounters, workflow resiliency, sanctions, password policies, integrity assurance in resource-bounded systems, active cyber defense, and the science of trust. Allaire Welk (left picture), NC State, addresses methods of learning for signals intelligence analysts. Ignacio X. Dominguez (right), NC State, listens to a question about his work on input device analytics.

The winning 2013 Best Scientific Cybersecurity Paper was delivered as an invited paper: Chang Liu of the University of Maryland presented "Memory Trace Oblivious Program Execution for Cloud Computing."

Next year’s HotSoS will be held in Pittsburgh, Pennsylvania and will be hosted by Carnegie Mellon University’s Science of Security Lablet.  Professor William Scherlis will chair the event.



RESEARCH PAPER SESSIONS

Papers presented during the research paper sessions covered a range of scientific issues related to the five hard problems of cybersecurity: scalability and composability, measurement, policy-governed secure collaboration, resilient architectures, and human behavior. The individual presentations are described below. The papers will be published by ACM in the conference proceedings.

Integrity Assurance in Resource-Bounded Systems through Stochastic Message Authentication
Aron Laszka, Yevgeniy Vorobeychik, and Xenofon Koutsoukos

Assuring communication integrity is a central problem in security. The presenters propose a formal game-theoretic framework for optimal stochastic message authentication, providing provable integrity guarantees for resource-bounded systems based on an existing MAC scheme. They use this framework to investigate attacker deterrence, optimal design of stochastic message authentication schemes, and provide experimental results on the computational performance of their framework in practice.
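The flavor of the approach can be sketched in a few lines. This is an illustration, not the authors' construction: the uniform sampling rule, the parameter p, and the use of HMAC-SHA256 are assumptions made here for concreteness.

```python
import hashlib
import hmac
import random

def stochastic_send(key: bytes, message: bytes, p: float, rng: random.Random):
    """Attach a MAC with probability p; otherwise send the message bare.

    Authenticating only a sampled fraction of traffic cuts per-message cost
    on resource-bounded devices, while an attacker's expected number of
    forgeries before detection stays bounded (roughly 1/p).
    """
    if rng.random() < p:
        tag = hmac.new(key, message, hashlib.sha256).digest()
        return message, tag
    return message, None

def verify(key: bytes, message: bytes, tag) -> bool:
    """Accept untagged messages; check tagged ones in constant time."""
    if tag is None:
        return True  # this message was not sampled for authentication
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Tampering succeeds only on unsampled messages, so raising p buys deterrence at the cost of more MAC computations; the paper's contribution is choosing that trade-off optimally against a strategic attacker.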

Active Cyber Defense Dynamics Exhibiting Rich Phenomena
Ren Zheng, Wenlian Lu, and Shouhuai Xu

The authors explore the rich phenomena that can be exhibited when the defender employs active defense to combat cyber attacks. The study shows that active cyber defense dynamics (or, more generally, cybersecurity dynamics) can exhibit bifurcation and chaos, with two implications for cybersecurity measurement and prediction: first, it is infeasible (or even impossible) to accurately measure and predict cyber security under certain circumstances; second, the defender must manipulate the dynamics to avoid unmanageable situations in real-life defense operations.
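The bifurcation and chaos at the heart of this result can be illustrated with a textbook one-dimensional dynamical system. The logistic map below is a generic stand-in, not the defense model from the paper; it shows how a single control parameter can move a system from a predictable fixed point into chaos, which is precisely what makes accurate long-horizon prediction infeasible.

```python
def step(x: float, r: float) -> float:
    """One step of the logistic map x -> r*x*(1-x)."""
    return r * x * (1.0 - x)

def orbit(x0: float, r: float, warmup: int = 200, n: int = 50) -> list:
    """Iterate past transients, then record n successive states."""
    x = x0
    for _ in range(warmup):
        x = step(x, r)
    states = []
    for _ in range(n):
        x = step(x, r)
        states.append(x)
    return states

# r = 2.5: the dynamics settle to the fixed point 1 - 1/r = 0.6 (predictable).
# r = 4.0: the same map is chaotic, and long-horizon prediction is hopeless.
```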

Towards a Science of Trust
Dusko Pavlovic

This paper explores the idea that security is not just a suitable subject for science, but that the process of security is also similar to the process of science. This similarity arises from the fact that both science and security depend on methods of inductive inference. Because of this dependency, a scientific theory can never be definitively proved, only disproved by new evidence and improved into a better theory. Because of the same dependency, every security claim and method has a lifetime and eventually needs to be improved.

Challenges with Applying Vulnerability Prediction Models
Patrick Morrison, Kim Herzig, Brendan Murphy, and Laurie Williams

The authors address vulnerability prediction models (VPMs) as a basis for software engineers to prioritize precious verification resources in the search for vulnerabilities. The goal of this research is to measure whether vulnerability prediction models built using standard recommendations perform well enough to provide actionable results for engineering resource allocation. They define "actionable" in terms of the inspection effort required to evaluate model results, and conclude that VPMs must be refined, possibly through security-specific metrics, to achieve actionable performance.

Preemptive Intrusion Detection: Theoretical Framework and Real-World Measurements
Phuong Cao, Eric Badger, Zbigniew Kalbarczyk, Ravishankar Iyer, and Adam Slagell

This paper presents a framework for highly accurate, preemptive detection of attacks, i.e., detection before system misuse. The authors evaluated the framework using security logs from real incidents that occurred over a six-year period at the National Center for Supercomputing Applications (NCSA); the data consisted of security incidents that had been identified only after the fact by security analysts. The framework detected 74 percent of the attacks, the majority of them before system misuse. In addition, it uncovered six hidden attacks that were not detected by intrusion detection systems during the incidents or by security analysts in post-incident forensic analyses.

Enabling Forensics by Proposing Heuristics to Identify Mandatory Log Events
Jason King, Rahul Pandita, and Laurie Williams

Software engineers often implement logging mechanisms to debug software and diagnose faults, but these mechanisms must also capture detailed traces of user activity to enable forensics and hold users accountable. Techniques for identifying which events to log are often subjective and produce inconsistent results. This study helps software engineers strengthen forensic-ability and user accountability by systematically identifying mandatory log events through processing of unconstrained natural-language software artifacts, and by proposing empirically derived heuristics to help determine whether an event must be logged.

Modelling User Availability in Workflow Resiliency Analysis
John C. Mace, Charles Morisset, and Aad van Moorsel

Workflows capture complex operational processes and include security constraints that limit which users can perform which tasks. An improper security policy may prevent certain tasks from being assigned and may force a policy violation; tools are therefore required that allow automatic evaluation of workflow resiliency. User availability can be modelled in multiple ways for the same workflow, and choosing the right model is a complex concern with a major impact on the calculated resiliency. The authors describe a number of user availability models and their encoding in the model checker PRISM, which they use to evaluate resiliency. They also show how model choice affects the resiliency computation in terms of its value, memory, and CPU time.
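As a rough sketch of the resiliency question, one can estimate by simulation the probability that a workflow can still be completed after random user failures. The encoding below (the function name, the independence assumption, and the Monte Carlo estimator) is illustrative only; the paper instead encodes richer availability models in the PRISM model checker.

```python
import itertools
import random

def resiliency(tasks, authorized, avail, trials=2000, seed=0):
    """Monte Carlo estimate of workflow resiliency: the probability that,
    with each user u independently present with probability avail[u], every
    task can still be assigned a distinct, authorized, present user.

    tasks      -- list of task names
    authorized -- dict mapping task -> set of users allowed to perform it
    avail      -- dict mapping user -> availability probability
    """
    rng = random.Random(seed)
    completed = 0
    for _ in range(trials):
        present = [u for u in avail if rng.random() < avail[u]]
        # Brute-force search for an injective assignment (fine for tiny workflows).
        for perm in itertools.permutations(present, len(tasks)):
            if all(user in authorized[task] for user, task in zip(perm, tasks)):
                completed += 1
                break
    return completed / trials
```

A resiliency of 1.0 means the policy tolerates the modelled failures; values near 0 flag policies that force violations whenever key users are away.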

An Empirical Study of Global Malware Encounters
Ghita Mezzour, Kathleen M. Carley, and L. Richard Carley

The authors empirically test alternative hypotheses about the factors behind international variation in the number of trojan, worm, and virus encounters, using Symantec Anti-Virus (AV) telemetry data collected from more than 10 million Symantec customer computers worldwide. Using regression analysis to test for the effects of computing and monetary resources, web browsing behavior, computer piracy, cybersecurity expertise, and international relations, they found that trojans, worms, and viruses are most prevalent in Sub-Saharan African and Asian countries. The main factor explaining these countries' high malware exposure is widespread computer piracy, especially when combined with poverty.

An Integrated Computer-Aided Cognitive Task Analysis Method for Tracing Cyber-Attack Analysis Processes
Chen Zhong, John Yen, Peng Liu, Rob Erbacher, Renee Etoty, and Christopher Garneau

Cyber-attack analysts must process large amounts of network data and reason under uncertainty to detect cyber-attacks. Capturing and studying analysts' fine-grained cognitive processes helps researchers gain a deep understanding of how analysts conduct analytical reasoning, and helps elicit their procedural knowledge and experience so that performance can be further improved. To conduct cognitive task analysis (CTA) studies in cyber-attack analysis, the authors proposed an integrated computer-aided data collection method with three building elements: a trace representation of the fine-grained cyber-attack analysis process, a computer tool supporting process tracing, and a laboratory experiment for collecting traces of analysts' cognitive processes during a cyber-attack analysis task.

All Signals Go: Investigating How Individual Differences Affect Performance on a Medical Diagnosis Task Designed to Parallel a Signals Intelligence Analyst Task
Allaire K. Welk and Christopher B. Mayhorn

Signals intelligence analysts perform complex decision-making tasks that involve gathering, sorting, and analyzing information. This study evaluated how individual differences influence performance in an Internet search-based medical diagnosis task designed to simulate a signals analyst task. The individual differences examined included working memory capacity, prior experience with elements of the task, prior experience using the Internet, and prior experience conducting Internet searches. Results indicated that working memory significantly predicted performance on the medical diagnosis task, while the other factors were not significant predictors. These results provide additional evidence that working memory capacity greatly influences performance on cognitively complex decision-making tasks, whereas experience with elements of the task may not, and suggest that working memory capacity should be considered when screening individuals for signals intelligence analyst positions.

Detecting Abnormal User Behavior Through Pattern-mining Input Device Analytics
Ignacio X. Domínguez, Alok Goel, David L. Roberts, and Robert St. Amant

This paper presents a method for detecting patterns in computer mouse usage that can give insight into a user's cognitive processes. The authors conducted a study using a computer version of the Memory game (also known as Concentration) that allowed some participants to reveal the content of the tiles, expecting the low-level mouse interaction patterns of these players to deviate from those of normal players without access to this information. They then trained models to detect these differences using task-independent input device features. The models detected cheating with 98.73% accuracy for players who cheated, or did not cheat, consistently for entire rounds of the game, and with 89.18% accuracy for cases in which players enabled and then disabled cheating within rounds.
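A toy version of such a pipeline reduces each mouse trace to task-independent features and classifies against per-class averages. The two features and the nearest-centroid model below are stand-ins chosen for brevity, not the paper's actual features or models.

```python
from statistics import mean

def features(trace):
    """Reduce a mouse trace, a list of (t, x, y) samples, to two
    task-independent features: mean pointer speed and mean pause length."""
    speeds, pauses = [], []
    for (t0, x0, y0), (t1, x1, y1) in zip(trace, trace[1:]):
        dt = t1 - t0
        pauses.append(dt)
        speeds.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt)
    return (mean(speeds), mean(pauses))

def nearest_centroid(traces_by_label):
    """Train a nearest-centroid classifier: average the feature vectors per
    label, then predict by squared Euclidean distance to the closest centroid."""
    centroids = {}
    for label, traces in traces_by_label.items():
        feats = [features(tr) for tr in traces]
        centroids[label] = tuple(mean(f[i] for f in feats) for i in range(2))

    def predict(trace):
        f = features(trace)
        return min(centroids,
                   key=lambda lab: sum((a - b) ** 2
                                       for a, b in zip(f, centroids[lab])))
    return predict
```

The intuition matches the study's premise: a player who can see the tiles moves the mouse faster and more directly than one who is searching from memory, and even crude features separate the two behaviors.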

Understanding Sanction under Variable Observability in a Secure, Collaborative Environment
Hongying Du, Bennett Narron, Nirav Ajmeri, Emily Berglund, Jon Doyle, and Munindar P. Singh

Many aspects of norm governance remain poorly understood, inhibiting adoption in real-life collaborative systems. This work focuses on the combined effects of sanction and the observability of the sanctioner in a secure, collaborative environment, using a simulation of agents that maintain "compliance" with enforced security norms while remaining "motivated" as researchers. The authors tested two hypotheses: that delayed observability of the environment leads to greater motivation of agents to complete research tasks than immediate observability, and that sanctioning a group for a violation leads to greater compliance with security norms than sanctioning an individual. They found that only the latter hypothesis is supported.

Measuring the Security Impacts of Password Policies Using Cognitive Behavioral Agent-Based Modeling
Vijay Kothari, Jim Blythe, Sean W. Smith, and Ross Koppel

Agent-based modeling can be a valuable asset to security personnel who wish to better understand the security landscape within their organization, especially as it relates to user behavior and circumvention. The authors argue in favor of cognitive behavioral agent-based modeling for usable security, report on their development of an agent-based model for a password management scenario, and perform a sensitivity analysis that provides valuable insights into improving security and directions for future work.



TUTORIALS

Tutorial 1: Social Network Analysis for Science of Security
Kathleen Carley, Carnegie Mellon University

The tutorial provided a brief introduction to the area of network science, covering analytics and visualization. Dr. Carley described the core ideas, most common metrics, critical theories, and an overview of key tools. She drew illustrative examples from three security-related issues: insider threat analysis, resilient organizational designs, and global cyber-security attacks.

Tutorial 2: Understanding and Accounting for Human Behavior
Sean Smith, Dartmouth College and Jim Blythe, University of Southern California

Since computers are machines, it's tempting to think of computer security as purely a technical problem. However, computing systems are created, used, and maintained by humans and exist to serve the goals of human and institutional stakeholders. Consequently, effectively addressing the security problem requires understanding this human dimension. The presenters discussed this challenge and the principal research approaches to it.

Tutorial 3: Policy-Governed Secure Collaboration
Munindar Singh, North Carolina State University

The envisioned Science of Security can be understood as a systemic body of knowledge with theoretical and empirical underpinnings that inform the engineering of secure information systems. The presentation addressed the underpinnings pertaining to the hard problem of secure collaboration, approaching cybersecurity from a sociotechnical perspective and understanding systems through the interplay of human behavior with technical architecture on the one hand and social architecture on the other. The presentation emphasized the social architecture and modeled it in terms of a formalization based on organizations and normative relationships. Dr. Singh described how norms provide a basis for specifying security requirements at a high level, a basis for accountability, and a semantic basis for trust. He concluded the presentation by providing some directions and challenges for future research, including formalization and empirical study.

Tutorial 4: Security-Metrics-Driven Evaluation, Design, Development and Deployment
William Sanders, University of Illinois at Urbana-Champaign

Making sound security decisions when designing, operating, and maintaining a complex system is a challenging task. Analysts need to be able to understand and predict how different factors affect overall system security. During system design, security analysts want to compare the security of multiple proposed system architectures. After a system is deployed, analysts want to determine where security enhancement should be focused by examining how the system is most likely to be successfully penetrated. Additionally, when several security enhancement options are being considered, analysts would like to evaluate the relative merit of each. In each of these scenarios, quantitative security metrics should provide insight on system security and aid security decisions. The tutorial provided a survey of existing quantitative security evaluation techniques and described new work being done at the University of Illinois at Urbana-Champaign in this field.

Tutorial 5: Resilient Architectures
Ravishankar Iyer, University of Illinois at Urbana-Champaign

Resilience brings together experts in security, fault tolerance, human factors, and high integrity computing for the design and validation of systems that are expected to continue to deliver critical services in the event of attacks and failures. The tutorial highlighted issues and challenges in designing systems that are resilient to both malicious attacks and accidental failures, provided both cyber and cyber-physical examples, and concluded by addressing the challenges and opportunities from both a theoretical and practical perspective.


Note: Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests for removal of links or modifications to specific citations via email, and please include the ID# of the specific citation in your correspondence.