Wigmore: A Constraint-Based Language for Reasoning About Evidence and Uncertainty

Historically, probability theory has proven very useful in dealing with uncertainty, especially when it can be quantified by statistical means. This is why the literature on the subject often distinguishes between risk, which applies to situations where uncertainty can be captured by a probability, and ambiguity, which applies where uncertainty exists but meaningful probabilities do not.
In the cybersecurity realm, we often deal with great amounts of uncertainty, and in our experience the events in this domain are better characterized by ambiguity than by risk: the adversary is not best modeled as a natural, stochastic process, but rather as a sentient, learning entity.

We are interested in creating software tools to reason about this kind of uncertainty in order to support effective decision-making in the cyber domain. Our work is inspired by a field of research called 'Belief Functions', which is in turn based on the well-known Dempster-Shafer (D-S) theory. Roughly, the difference between D-S theory and traditional probabilistic approaches such as Bayesian networks is that D-S theory is concerned with combining strengths of evidence, not with updating probabilities. Existing belief function methods typically consist of just a small number of evidence combination operators. While these operators are useful, our needs for adversarial reasoning include not just the aggregation of data as evidence, but also the aggregation of the defender's beliefs.
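To make the idea of an evidence combination operator concrete, the sketch below implements Dempster's rule of combination, the canonical operator in D-S theory: two mass functions over subsets of a frame of discernment are combined by multiplying masses of intersecting subsets and normalizing away the conflicting mass. The function name, the representation of mass functions as dicts of frozensets, and the example evidence values are illustrative choices, not part of Wigmore itself.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    via Dempster's rule: multiply masses of every pair of focal sets,
    assign the product to their intersection, and renormalize by the
    non-conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence; rule undefined")
    norm = 1.0 - conflict
    return {s: m / norm for s, m in combined.items()}

# Hypothetical example: two sensors weigh in on whether a host is
# compromised, each leaving some mass on the whole frame (ignorance).
frame_m = frozenset({'malicious'})
frame_all = frozenset({'malicious', 'benign'})
sensor1 = {frame_m: 0.6, frame_all: 0.4}
sensor2 = {frame_m: 0.7, frame_all: 0.3}
combined = dempster_combine(sensor1, sensor2)
# combined[frame_m] == 0.88, combined[frame_all] == 0.12
```

Note how the combined belief in 'malicious' (0.88) exceeds either sensor's alone, while the residual 0.12 remains uncommitted ignorance rather than being forced onto 'benign'; this distinction between disbelief and lack of belief is precisely what mass functions add over ordinary probabilities.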

In this talk, we will discuss the design of Wigmore, a language we have created (named after John Henry Wigmore, a pioneer in the visualization of complex evidence chains) to address these needs. We have augmented existing belief function operators and D-S theory concepts to produce an expressive, constraint-based language that allows its users to express a rich set of beliefs about the combination of ambiguous pieces of evidence. In this presentation, we will introduce the language and work through some illustrative examples to show how it can be used to make sense of evidence when conditions of uncertainty prevail.

Mr. Burke is a Principal Investigator at Galois with over 20 years of experience in the application of statistical and mathematical modeling, machine learning, and AI techniques to problems in the natural and social sciences, with a specialization in generalized Bayesian techniques for reasoning under uncertainty. He received an M.S. in Computer Science from the Oregon Graduate Institute and a B.S.M.E. from Lehigh University. Since joining Galois in 2004, his work has included research into logics for reasoning about trust in the design of secure systems, techniques for ensuring robust decision-making in multi-agent systems, and the application of bio-inspired approaches to machine learning and network security. His recent experience includes a PI role on several DoD-funded projects focused on counterdeception, adversarial reasoning, and decision support systems. Mr. Burke is a U.S. citizen.

Creative Commons Attribution-NonCommercial 3.0
