Biblio

Found 267 results

Unpublished
Kathleen Carley.  2015.  Crisis Mapping: Big Data from a Dynamic Network Analytic Perspective.

Big data holds the promise of enabling analysts to predict, mitigate and respond rapidly to natural disasters and human crises. Rapid response is critical for saving lives and mitigating damage.  Rapid response requires understanding the socio-cultural terrain; i.e., understanding who needs what or can provide what, where, how, why and when. Today, new technologies are forging a path making use of sensor and social media data to understand the socio-cultural terrain and provide faster response during crises.  A review of this area, the state of the art, and known challenges are discussed.  The basic argument is that this technology is still in its infancy.  Critical scientific challenges, social challenges, and legal challenges will need to be addressed before the promise of big data for Crisis Mapping is fulfilled.

Limin Jia, Shayak Sen, Deepak Garg, Anupam Datta.  2015.  System M: A Program Logic for Code Sandboxing and Identification.

Security-sensitive applications that execute untrusted code often check the code’s integrity by comparing its syntax to a known good value or sandbox the code to contain its effects. System M is a new program logic for reasoning about such security-sensitive applications. System M extends Hoare Type Theory (HTT) to trace safety properties and, additionally, contains two new reasoning principles. First, its type system internalizes logical equality, facilitating reasoning about applications that check code integrity. Second, a confinement rule assigns an effect type to a computation based solely on knowledge of the computation’s sandbox. We prove the soundness of System M relative to a step-indexed trace-based semantic model. We illustrate both new reasoning principles of System M by verifying the main integrity property of the design of Memoir, a previously proposed trusted computing system for ensuring state continuity of isolated security-sensitive applications.
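
As a rough schematic of the confinement principle (our own illustrative rendering, not the paper's actual rule), a computation e whose sandbox permits at most the effects ε can be assigned that effect type without analyzing e itself:

    \frac{\Gamma \vdash e : \mathsf{comp}\ \tau \qquad \mathsf{sandbox}(e) \subseteq \varepsilon}
         {\Gamma \vdash e : \mathsf{comp}^{\varepsilon}\ \tau}\ (\textsc{Confine})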

Thesis
Ghita Mezzour.  2015.  Assessing the Global Cyber and Biological Threat. Electrical and Computer Engineering Department and Institute for Software Research. Doctor of Philosophy

In today’s inter-connected world, threats from anywhere in the world can have serious global repercussions. In particular, two types of threats have a global impact: 1) cyber crime and 2) cyber and biological weapons. If a country’s environment is conducive to cyber criminal activities, cyber criminals will use that country as a basis to attack end-users around the world. Cyber weapons and biological weapons can now allow a small actor to inflict major damage on a major military power. If cyber and biological weapons are used in combination, the damage can be amplified significantly. Given that the cyber and biological threat is global, it is important to identify the countries that pose the greatest threat and design action plans to reduce the threat from these countries. However, prior work on cyber crime lacks empirical substantiation for reasons why some countries’ environments are conducive to cyber crime. Prior work on cyber and biological weapon capabilities mainly consists of case studies which focus only on select countries and thus are not generalizable. In sum, assessing the global cyber and biological threat currently lacks a systematic empirical approach. In this thesis, I take an empirical and systematic approach towards assessing the global cyber and biological threat. The first part of the thesis focuses on cyber crime. I examine international variation in cyber crime infrastructure hosting and cyber crime exposure, and I empirically test hypotheses about the factors behind such variation. In that work, I use Symantec’s telemetry data, collected from 10 million Symantec customer computers worldwide and accessed through Symantec’s Worldwide Intelligence Network Environment (WINE). I find that addressing corruption in Eastern Europe or computer piracy in Sub-Saharan Africa has the potential to reduce global cyber crime. The second part of the thesis focuses on cyber and biological weapon capabilities. I develop two computational methodologies: one to assess countries’ biological capabilities and one to assess countries’ cyber capabilities. The methodologies examine all countries in the world and can be used by non-experts who only have access to publicly available data. I validate the biological weapon assessment methodology by comparing its assessments to historical data. This work has the potential to proactively reduce the global cyber and biological weapon threat.

Vangaveeti, Anoosha.  2015.  An Assessment of Security Problems in Open Source Software. Computer Science. MS

Improving software security through changes in software design and development processes appears to be a very hard problem. For example, well documented security issues such as Structured Query Language (SQL) injection, after more than a decade, still top most vulnerability lists. Security priority is often subdued due to constraints such as time-to-market and resources. Furthermore, security process outcomes are hard to quantify and even harder to predict or relate to process improvement activities. In part this is because of the nature of security faults - they are in statistical terms "rare" and often very complex compared to "regular" non-security faults. In part it is the irregular and unpredictable nature of security threats and attacks that puts the software under attack into states it was not designed for and subjects it to what would be considered "non-operational" use. In many cases it is the human component of the system that fails - for example, due to phishing or due to incorrect use of a software product. On the other hand, we have decades of experience developing reliable software (admittedly subject to similar resource, cost and time constraints). The central question of interest in this thesis is to what extent we can leverage some of the software reliability engineering (SRE) models, processes, and metrics that work in the "classical" operational space to develop predictive software security engineering assessment and development elements. Specific objectives are a) to investigate the use of (possibly modified) SRE practices to characterize security properties of software, and b) to assess how software design and development processes could be enhanced to avoid, eliminate and tolerate security problems and attacks. We are particularly interested in open source software security, the conditions under which SRE practices may be useful, and the information that this can provide about the security quality of a software product. We examined public information about security problem reports for the open source Fedora and RHEL series of software releases, the Chromium project and the Android project. The data that we analyzed was primarily about security problems reported from post-release in-the-field use of the products. What can we learn about the non-operational processes (and possible threats) related to security problems? One aspect is classification of security problems based on the traits that contribute to the injection of problems into code, whether due to poor practices or limited knowledge (epistemic errors), or due to random accidental events (aleatoric errors). Knowing the distribution can help understand the attack space and help improve development processes and testing of the next version. For example, in the case of Fedora, the distribution of security problems found post-release was consistent across two different releases of the software. The security problem discovery rate appears to be roughly constant but much lower (ten to a hundred times lower) than the initial non-security problem discovery rate. Similarly, in the case of RHEL, the distribution of security problems found post-release was consistent, and the number of security problems kept decreasing across six different releases of the software. The security problem discovery rate again appears to be roughly constant but much lower than the initial non-security problem discovery rate. In the case of Chromium, the number of discovered security problems is orders of magnitude higher than for the other products, yet that does not appear to translate into a higher incidence of field breaches. One reason could be the Chromium "bounty" program for problem discovery. We find that some classical reliability models can be used as one of the tools to estimate the residual number of security problems in both the current release and future releases of the software, and through that provide a measure of the security characteristics of the software - for example, to assess whether, under given usage conditions, the security problem discovery rate is increasing or decreasing, and what that may mean. Based on our findings, we discuss an agile software testing process that combines operational and non-operational (or attack-related) testing with the intent of finding and eliminating more security problems earlier in the software development process. Knowledge of the vulnerable components from an architectural view, and of the frequency of vulnerabilities in each component, helps in prioritizing security test resources.
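
The abstract does not commit to a specific reliability model; as one hedged illustration of the estimation step, the sketch below fits the classical Goel-Okumoto reliability growth model to hypothetical monthly security problem counts and reads off the residual estimate:

    import numpy as np
    from scipy.optimize import curve_fit

    # Cumulative security problems discovered by month t (hypothetical counts).
    months = np.arange(1, 13)
    cum_problems = np.array([3, 5, 8, 10, 11, 13, 14, 15, 15, 16, 17, 17])

    # Goel-Okumoto mean value function: mu(t) = a * (1 - exp(-b*t)), where a is
    # the expected total number of problems that would eventually be found.
    def goel_okumoto(t, a, b):
        return a * (1.0 - np.exp(-b * t))

    (a, b), _ = curve_fit(goel_okumoto, months, cum_problems, p0=(20.0, 0.1))

    # Residual problems: expected total minus those already discovered.
    print(f"estimated total = {a:.1f}, residual = {a - cum_problems[-1]:.1f}")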

Ben Blum.  2012.  Landslide: Systematic Exploration for Kernel-Space Race Detection. School of Computer Science. MS:88.

Systematic exploration is an approach to finding race conditions by deterministically executing every possible interleaving of thread transitions and identifying which ones expose bugs. Current systematic exploration techniques are suitable for testing user-space programs, but are inadequate for testing kernels, where the testing framework’s control over concurrency is more complicated. We present Landslide, a systematic exploration tool for finding races in kernels. Landslide targets Pebbles, the kernel specification that students implement in the undergraduate Operating Systems course at Carnegie Mellon University (15-410). We discuss the techniques Landslide uses to address the general challenges of kernel-level concurrency, and we evaluate its effectiveness and usability as a debugging aid. We show that our techniques make systematic testing in kernel-space feasible and that Landslide is a useful tool for doing so in the context of 15-410.
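
To make the idea concrete, here is a toy sketch of systematic exploration (Landslide itself drives a kernel inside a simulator; the two "threads" and the lost-update check below are hypothetical):

    def interleavings(threads):
        """Yield every schedule that preserves each thread's internal order."""
        if not any(threads):
            yield []
            return
        for i in range(len(threads)):
            if threads[i]:
                rest = [list(t) for t in threads]
                step = rest[i].pop(0)
                for tail in interleavings(rest):
                    yield [step] + tail

    # Two hypothetical threads racing on a shared counter: load, then store.
    t0 = [("t0", "load"), ("t0", "store")]
    t1 = [("t1", "load"), ("t1", "store")]

    for schedule in interleavings([t0, t1]):
        counter, loaded = 0, {}
        for tid, op in schedule:
            if op == "load":
                loaded[tid] = counter      # read the shared counter
            else:
                counter = loaded[tid] + 1  # write back the stale read + 1
        if counter != 2:
            print("lost update exposed by:", schedule)

Of the six possible interleavings here, four end with the counter at 1 rather than 2; a systematic tester finds and deterministically reproduces each of them.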

Subramani, Shweta.  2014.  Security Profile of Fedora. Computer Science. MS:105.

The process of software development and evolution has proven difficult to improve. For example,  well documented security issues such as SQL injection (SQLi), after more than a decade, still top  most vulnerability lists. Quantitative security process and quality metrics are often subdued due to  lack of time and resources. Security problems are hard to quantify and even harder to predict or  relate to any process improvement activity.  The goal of this thesis is to assess usefulness of “classical” software reliability engineering (SRE)  models in the context of open source software security, the conditions under which they may be  useful, and the information that they can provide with respect to the security quality of a software  product.  We start with security problem reports for open source Fedora series of software releases.We  illustrate how one can learn from normal operational profile about the non-operational processes  related to security problems. One aspect is classification of security problems based on the human  traits that contribute to the injection of problems into code, whether due to poor practices or limited  knowledge (epistemic errors), or due to random accidental events (aleatoric errors). Knowing the  distribution aids in development of an attack profile. In the case of Fedora, the distribution of  security problems found post-release was consistent across four different releases of the software.  The security problem discovery rate appears to be roughly constant but much lower than the initial  non-security problem discovery rate. Previous work has shown that non-operational testing can help  accelerate and focus the problem discovery rate and that it can be successfully modeled.We find  that some classical reliability models can be used with success to estimate the residual number of  security problems, and through that provide a measure of the security characteristics of the software.  We propose an agile software testing process that combines operational and non-operational (or  attack related) testing with the intent of finding more security problems faster. 

Michael Maass.  2016.  A Theory and Tools for Applying Sandboxes Effectively. Institute for Software Research, School of Computer Science. Doctor of Philosophy:166.

Building modern software without extensive supply chains is more expensive and time consuming. Supply chains decrease these development costs and risks, but typically at the price of increased security risk. In particular, it is often difficult to understand or verify what a software component delivered by a third party does or could do. Such a component could contain unwanted behaviors, vulnerabilities, or malicious code, many of which become incorporated in applications utilizing the component. Sandboxes provide relief by encapsulating a component and imposing a security policy on it. This limits the operations the component can perform without as much need to trust or verify the component. Instead, a component user must trust or verify the relatively simple sandbox. Given this appealing prospect, researchers have spent the last few decades developing new sandboxing techniques and sandboxes. However, while sandboxes have been adopted in practice, they are not as pervasive as they could be. Why are sandboxes not achieving ubiquity at the same rate as extensive supply chains? This thesis advances our understanding of and overcomes some barriers to sandbox adoption. We systematically analyze ten years (2004–2014) of sandboxing research from top-tier security and systems conferences. We uncover two barriers: (1) sandboxes are often validated using relatively subjective techniques and (2) usability for sandbox deployers is often ignored by the studied community. We then focus on the Java sandbox to empirically study its use within the open source community. We find features in the sandbox that benign applications do not use, but which have promoted a thriving exploit landscape. We develop run-time monitors for the Java Virtual Machine (JVM) to turn off these features, stopping all known sandbox-escaping JVM exploits without breaking benign applications. Furthermore, we find that the sandbox contains a high degree of complexity that benign applications need but that hampers sandbox use. When studying the sandbox’s use, we did not find a single application that successfully deployed the sandbox for security purposes, which motivated us to overcome benignly-used complexity via tooling. We develop and evaluate a series of tools to automate the most complex tasks, which currently require error-prone manual effort. Our tools help users derive, express, and refine a security policy and impose it on targeted Java application JARs and classes. This tooling is evaluated through case studies with industrial collaborators in which we sandbox components that were previously difficult to sandbox securely. Finally, we observe that design and implementation complexity causes sandbox developers to accidentally create vulnerable sandboxes. Thus, we develop and evaluate a sandboxing technique that leverages existing cloud computing environments to execute untrusted computations. Malicious outcomes produced by the computations are contained by ephemeral virtual machines. We describe a field trial using this technique with Adobe Reader and compare the new sandbox to existing sandboxes using a qualitative framework we developed.
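
As a toy sketch of the policy-derivation idea (the real tools emit Java sandbox policies; the trace format and allowlist below are invented for illustration):

    from collections import defaultdict

    # Hypothetical (component, action, target) accesses observed while
    # exercising the application under a monitor.
    trace = [
        ("report.jar", "file.read", "/etc/fonts"),
        ("report.jar", "file.read", "/etc/fonts"),
        ("report.jar", "net.connect", "api.example.com:443"),
    ]

    def derive_policy(trace):
        """Collapse an observed trace into a minimal allowlist: any access
        not witnessed during profiling is denied by default."""
        policy = defaultdict(set)
        for component, action, target in trace:
            policy[component].add((action, target))
        return policy

    for component, grants in derive_policy(trace).items():
        print(component)
        for action, target in sorted(grants):
            print(f"  grant {action} {target}")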

Report
Ryan Wagner, Matthew Fredrikson, David Garlan.  2017.  An Advanced Persistent Threat Exemplar.

Security researchers do not have sufficient example systems for conducting research on advanced persistent threats, and companies and agencies that experience attacks in the wild are reluctant to release detailed information that can be examined. In this paper, we describe an Advanced Persistent Threat Exemplar that is intended to provide a real-world attack scenario with sufficient complexity for reasoning about defensive system adaptation, while not containing so much information as to be too complex. It draws from actual published attacks and from the authors' experiences as security engineers.

Riaz, Maria, Breaux, Travis, Williams, Laurie, Niu, Jianwei.  2012.  On the Design of Empirical Studies to Evaluate Software Patterns: A Survey.

Software patterns are created with the goal of capturing expert knowledge so it can be efficiently and effectively shared with the software development community. However, patterns in practice may or may not achieve these goals. Empirical studies of the use of software patterns can help in providing deeper insight into whether these goals have been met. The objective of this paper is to aid researchers in designing empirical studies of software patterns by summarizing the study designs of software patterns available in the literature. The important components of these study designs include the evaluation criteria and how the patterns are presented to study participants. We select and analyze 19 distinct empirical studies and identify 17 independent variables in three different categories (participant demographics; pattern presentation; problem presentation). We also extract 10 evaluation criteria with 23 associated observable measures. Additionally, by synthesizing the reported observations, we identify challenges faced during study execution. Provision of multiple domain-specific examples of pattern application and tool support to assist in pattern selection are helpful for the study participants in understanding and completing the study task. Capturing data regarding the cognitive processes of participants can provide insights into the findings of the study.

Hanan Hibshi, Travis Breaux, Maria Riaz, Laurie Williams.  2015.  Discovering Decision-Making Patterns for Security Novices and Experts.

Security analysis requires some degree of knowledge to align threats to vulnerabilities in information technology. Despite the abundance of security requirements checklists, the evidence suggests that security experts are not applying these checklists. Instead, they default to their background knowledge to identify security vulnerabilities. To better understand the different effects of security checklists, analysis and expertise, we conducted a series of interviews to capture and encode the decision-making process of security experts and novices during three security requirements analysis exercises. Participants were asked to analyze three kinds of artifacts: source code, data flow diagrams, and network diagrams, for vulnerabilities, and then to apply a requirements checklist to demonstrate their ability to mitigate vulnerabilities. We framed our study using Situation Awareness theory to elicit responses that were analyzed using coding theory and grounded analysis. Our results include decision-making patterns that characterize how analysts perceive, comprehend and project future threats, and how these patterns relate to selecting security mitigations. Based on this analysis, we discovered new theory to measure how security experts and novices apply attack models and how structured and unstructured analysis enable increasing security requirements coverage. We discuss how our method could be adapted and applied to improve training and education instruments for security analysts.

Jiri Simsa, Randy Bryant, Garth Gibson, Jason Hickey.  2011.  Efficient Exploratory Testing of Concurrent Systems.

In our experience, exploratory testing has reached a level of maturity that makes it a practical and often the most cost-effective approach to testing. Notably, previous work has demonstrated that exploratory testing is capable of finding bugs even in well-tested systems [4, 17, 24, 23]. However, the number of bugs found gives little indication of the efficiency of a testing approach. To drive testing efficiency, this paper focuses on techniques for measuring and maximizing the coverage achieved by exploratory testing. In particular, this paper describes the design, implementation, and evaluation of Eta, a framework for exploratory testing of multithreaded components of a large-scale cluster management system at Google. For simple tests (with millions to billions of possible executions), Eta achieves complete coverage one to two orders of magnitude faster than random testing. For complex tests, Eta adopts a state space reduction technique to avoid the need to explore over 85% of executions and harnesses parallel processing to explore multiple test executions concurrently, achieving a throughput increase of up to 17.5×. 
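
To see why even "simple tests" admit millions to billions of executions, note that k threads taking n1, ..., nk steps have (n1 + ... + nk)! / (n1! ... nk!) possible interleavings. A quick check (our own arithmetic, not a figure from the paper):

    from math import factorial

    def num_interleavings(steps):
        """Number of thread interleavings: a multinomial coefficient."""
        total = factorial(sum(steps))
        for n in steps:
            total //= factorial(n)
        return total

    # Four hypothetical threads of ten steps each: about 4.7e21 schedules,
    # which is why coverage must be measured and maximized, not exhausted.
    print(num_interleavings([10, 10, 10, 10]))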

Erik Zawadzki, Andre Platzer, Geoffrey Gordon.  2014.  A Generalization of SAT and #SAT for Robust Policy Evaluation.

Both SAT and #SAT can represent difficult problems in seemingly dissimilar areas such as planning, verification, and probabilistic inference. Here, we examine an expressive new language, #∃SAT, that generalizes both of these languages. #∃SAT problems require counting the number of satisfiable formulas in a concisely-describable set of existentially quantified, propositional formulas. We characterize the expressiveness and worst-case difficulty of #∃SAT by proving it is complete for the complexity class #P^NP[1], and relating this class to more familiar complexity classes. We also experiment with three new general-purpose #∃SAT solvers on a battery of problem distributions including a simple logistics domain. Our experiments show that, despite the formidable worst-case complexity of #P^NP[1], many of the instances can be solved efficiently by noticing and exploiting a particular type of frequent structure.
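
In one common phrasing consistent with the description above, each assignment to the variables x picks out one existentially quantified formula ψ(x, ·) from the concisely-described set, and the task is to count

    \#\exists\mathrm{SAT}(\psi) \;=\; \bigl|\,\{\, x \in \{0,1\}^n \;:\; \exists y \in \{0,1\}^m.\ \psi(x, y) = 1 \,\}\,\bigr|

so SAT is the special case of asking whether the count is nonzero, and #SAT is the special case with no existential block y.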

Ghita Mezzour, L. Richard Carley, Kathleen Carley.  2014.  Global Mapping of Cyber Attacks.

Identifying the factors behind countries’ weakness to cyber-attacks is an important step towards addressing these weaknesses at the root level. For example, identifying the factors that make some countries cyber-crime safe havens can inform policy actions about how to reduce the attractiveness of these countries to cyber-criminals. Currently, however, identifying these factors is mostly based on expert opinions and speculations.

In this work, we perform an empirical study to statistically test the validity of these opinions and speculations. In our analysis, we use Symantec’s World Intelligence Network Environment (WINE) Intrusion Prevention System (IPS) telemetry data, which contain attack reports from more than 10 million customer computers worldwide. We use regression analysis to test for the relevance of multiple factors including monetary and computing resources, cyber-security research and institutions, and corruption.

Our analysis confirms some hypotheses and disproves others. We find that many countries in Eastern Europe extensively host attacking computers because of a combination of good computing infrastructure and high corruption rate.  We also find that web attacks and fake applications are most prevalent in rich countries because attacks on these countries are more lucrative. Finally, we find that computers in Africa launch the lowest rates of cyber-attacks. This is surprising given the bad cyber reputation of some African countries such as Nigeria. Our research has many policy implications.
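
A minimal sketch of the regression step, with entirely hypothetical country-level numbers (the study itself uses WINE telemetry and a richer factor set):

    import numpy as np

    # Hypothetical predictors per country: [computing infrastructure index,
    # corruption index]; response: attacking hosts per 10,000 machines.
    X = np.array([[0.9, 0.8], [0.8, 0.7], [0.3, 0.9], [0.7, 0.2], [0.2, 0.3]])
    y = np.array([120.0, 95.0, 30.0, 25.0, 5.0])

    # Ordinary least squares with an intercept column.
    A = np.column_stack([np.ones(len(X)), X])
    (intercept, b_infra, b_corruption), *_ = np.linalg.lstsq(A, y, rcond=None)
    print(f"infrastructure effect = {b_infra:.1f}, "
          f"corruption effect = {b_corruption:.1f}")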

Jeffrey Gennari, David Garlan.  2012.  Measuring Attack Surface in Software Architecture.

In this report we show how to adapt the notion of “attack surface” to formally evaluate security properties at the architectural level of design and to identify vulnerabilities in architectural designs. Further, we explore the application of this metric in the context of architecture-based transformations to improve security by reducing the attack surface. These transformations are described in detail and validated with a simple experiment.
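
A schematic of such a measure (the damage-potential/effort weighting follows the common attack-surface literature; the architectural entry points and numbers are invented):

    # Architectural entry/exit points: (name, damage_potential, attacker_effort).
    entry_points = [
        ("public HTTP connector", 5.0, 1.0),
        ("admin console", 8.0, 4.0),
        ("internal RPC channel", 6.0, 6.0),
    ]

    def attack_surface(points):
        """Sum of damage-potential/effort ratios over exposed entry points."""
        return sum(damage / effort for _, damage, effort in points)

    # An architecture-based transformation that removes one entry point
    # shrinks the measure, making the improvement quantifiable.
    before = attack_surface(entry_points)
    after = attack_surface([p for p in entry_points if p[0] != "admin console"])
    print(f"surface before = {before:.2f}, after = {after:.2f}")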

Hanan Hibshi, Rocky Slavin, Jianwei Niu, Travis Breaux.  2014.  Rethinking Security Requirements in RE Research.

As information security became an increasing concern for software developers and users, requirements engineering (RE) researchers brought new insight to security requirements. Security requirements aim to address security at the early stages of system design while accommodating the complex needs of different stakeholders. Meanwhile, other research communities, such as usable privacy and security, have also examined these requirements with the specialized goal of making security more usable for stakeholders, from product owners to system users and administrators. In this paper we report results from conducting a literature survey to compare security requirements research from RE conferences with the Symposium on Usable Privacy and Security (SOUPS). We report similarities between the two research areas, such as common goals, technical definitions, research problems, and directions. Further, we clarify the differences between these two communities to understand how they can leverage each other’s insights. From our analysis, we recommend new directions in security requirements research, mainly to expand the meaning of security requirements in RE to reflect the technological advancements that the broader field of security is experiencing. These recommendations to encourage cross-collaboration with other communities are not limited to the security requirements area; in fact, we believe they can be generalized to other areas of RE.

Alain Forget, Saranga Komanduri, Alessandro Acquisti, Nicolas Christin, Lorrie Cranor, Rahul Telang.  2014.  Security Behavior Observatory: Infrastructure for Long-term Monitoring of Client Machines.

Much of the data researchers usually collect about users’ privacy and security behavior comes from short-term studies and focuses on specific, narrow activities. We present a design architecture for the Security Behavior Observatory (SBO), a client-server infrastructure designed to collect a wide array of data on user and computer behavior from a panel of hundreds of participants over several years. The SBO infrastructure had to be carefully designed to fulfill several requirements. First, the SBO must scale with the desired length, breadth, and depth of data collection. Second, we must take extraordinary care to ensure the security and privacy of the collected data, which will inevitably include intimate details about our participants’ behavior. Third, the SBO must serve our research interests, which will inevitably change over the course of the study as collected data are analyzed, interpreted, and suggest further lines of inquiry. We describe in detail the SBO infrastructure, its secure data collection methods, the benefits of our design and implementation, as well as the hurdles and tradeoffs to consider when designing such a data collection system.

Presentation
Luis Caires, Jorge Perez, Frank Pfenning, Bernardo Toninho.  2013.  Logic-Based Domain-Aware Session Types.

Software services and governing communication protocols are increasingly domain-aware. Domains can have multiple interpretations, such as the principals on whose behalf processes act or the location at which parties reside. Domains impact protocol compliance and access control, two central issues to overall functionality and correctness in distributed systems. This paper proposes a session-typed process framework for domain-aware communication-centric systems based on a Curry-Howard interpretation of linear logic, here augmented with nominals from hybrid logic indicating domains. These nominals are explicit in the process expressions and govern domain migration, subject to a parametric accessibility relation familiar from the Kripke semantics for modal logic. Flexible access relationships among domains can be elegantly defined and statically enforced. The framework can also account for scenarios in which domain information is discovered only at runtime. Due to the logical origins of our systems, well-typed processes enjoy session fidelity, global progress, and termination. Moreover, well-typed processes always respect the accessibility relation and satisfy a form of domain parametricity, two properties crucial to show that domain-related properties of concrete programs are satisfied.

Jonathan Aldrich, Alex Potanin.  2016.  Naturally Embedded DSLs. Systems, Programming, Languages and Applications: Software for Humanity (SPLASH).

Domain-specific languages can be embedded in a variety of ways within a host language. The choice of embedding approach entails significant tradeoffs in the usability of the embedded DSL. We argue that embedding DSLs naturally within the host language results in the best experience for end users of the DSL. A naturally embedded DSL is one that uses natural syntax, static semantics, and dynamic semantics for the DSL, all of which may differ from the host language. Furthermore, it must be possible to use DSLs together naturally, meaning that different DSLs cannot conflict, and the programmer can easily tell which code is written in which language.
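
For contrast, a sketch of a conventional, non-natural embedding in Python (an invented query mini-language) in which the DSL can only borrow the host's syntax and semantics:

    class Field:
        """A query DSL embedded via operator overloading: the DSL is stuck
        with host syntax and cannot impose its own static checks."""
        def __init__(self, name):
            self.name = name
        def __eq__(self, value):
            return f"{self.name} = {value!r}"
        def __gt__(self, value):
            return f"{self.name} > {value!r}"

    age, city = Field("age"), Field("city")
    print((age > 30), "AND", (city == "Lisbon"))
    # -> age > 30 AND city = 'Lisbon'

Overloading == here hijacks the host's own equality semantics for Field objects; avoiding precisely this kind of conflict between languages is what natural embedding is after.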

Miscellaneous
Radu Vanciu, Ebrahim Khalaj, Marwan Abi-Antoun.  2014.  Comparative Evaluation of Static Analyses that Find Security Vulnerabilities.

To find security vulnerabilities, many research approaches and commercial tools use a static analysis and check constraints. Previous work used a benchmark to compare several approaches in which the static analysis and constraints are combined, with an evaluation focused on corner cases of the Java language. We extend the comparative evaluation of these approaches to include one approach that separates the constraints from the static analysis. We also extend the benchmark to cover more classes of security vulnerabilities. Approaches that combine the static analysis and constraints work well for vulnerabilities that are sensitive to the order in which statements are executed. The additional effort required to write separate constraints is rewarded by better recall in dealing with dataflow communication and better precision for callback methods that are common in applications built on frameworks such as Android.

Arbob Ahmad, Robert Harper.  2015.  An Epistemic Formulation of Information Flow Analysis.

Most accounts of information flow security in programming languages emphasize non-interference to characterize security: in a secure program, changes to high-security inputs do not alter the values of low-security outputs. The definition of non-interference is incompatible with declassification, which allows some low-security outputs to be influenced by high-security inputs. We propose an alternative account of information flow based on an epistemic logic of computational effects. Rather than view a program as a function from inputs to outputs, we instead embrace the principle that information flow security is concerned with the effects a program has on its execution environment. These effects are modelled using a substructural epistemic logic that tracks the flow of knowledge gained by principals and communication channels during execution. Confidentiality is expressed by proving necessary conditions for a principal to know a sensitive fact at the end of an execution. In the simplest case the necessary condition is falsehood, which means that a principal cannot know a secret as a result of a well-typed execution of a program. In the presence of declassification a necessary condition for disclosure is the existence of a proof of authorization in a formal authorization logic, expressing that sensitive data is disclosed only when explicitly authorized. Rather than taken as the primary result, the classical non-interference property arises in the proof of adequacy of the epistemic theory of disclosure, ensuring that it accurately models program behavior. It is suggested that an epistemic account of information flow security is both more natural and more expressive than classical accounts based only on non-interference.
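
For reference, the classical non-interference property that the adequacy proof recovers can be written directly from the abstract's informal definition: for any two high-security inputs h1 and h2 and any low-security input l, the low-observable outputs of a secure program P coincide,

    \forall h_1, h_2, \ell.\;\; \mathsf{out}_L\bigl(P(h_1, \ell)\bigr) \;=\; \mathsf{out}_L\bigl(P(h_2, \ell)\bigr)

and it is exactly this equation that declassification deliberately violates, which motivates the epistemic account.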

Javier Camara, Antonia Lopes, David Garlan, Bradley Schmerl.  2014.  Impact Models for Architecture-Based Self-Adaptive Systems.

Self-adaptive systems have the ability to adapt their behavior to dynamic operation conditions. In reaction to changes in the environment, these systems determine the appropriate corrective actions based in part on information about which action will have the best impact on the system. Existing models used to describe the impact of adaptations are either unable to capture the underlying uncertainty and variability of such dynamic environments, or are not compositional and described at a level of abstraction too low to scale in terms of specification effort required for non-trivial systems. In this paper, we address these shortcomings by describing an approach to the specification of impact models based on architectural system descriptions, which at the same time allows us to represent both variability and uncertainty in the outcome of adaptations, hence improving the selection of the best corrective action. The core of our approach is an impact model language equipped with a formal semantics defined in terms of Discrete Time Markov Chains. To validate our approach, we show how employing our language can improve the accuracy of predictions used for decision-making in the Rainbow framework for architecture-based self-adaptation.
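
A minimal sketch of the underlying semantics (a two-state chain with invented probabilities and utilities; the paper's impact model language is given semantics in terms of such Discrete Time Markov Chains):

    import numpy as np

    # States: 0 = degraded, 1 = healthy. One DTMC per adaptation action,
    # capturing uncertainty and variability in the action's outcome.
    P_add_server = np.array([[0.2, 0.8],
                             [0.0, 1.0]])
    P_do_nothing = np.array([[0.9, 0.1],
                             [0.1, 0.9]])

    utility = np.array([0.0, 1.0])  # value of ending in each state
    start = np.array([1.0, 0.0])    # the system is currently degraded

    def expected_impact(P, steps=3):
        """Propagate the state distribution and score the expected outcome."""
        return start @ np.linalg.matrix_power(P, steps) @ utility

    # Pick the corrective action with the best predicted impact.
    for name, P in [("add server", P_add_server), ("do nothing", P_do_nothing)]:
        print(f"{name}: expected utility = {expected_impact(P):.3f}")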