HCSS 2017

HCSS-2017[3].pdf

Allowing Bounded Leakage in Secure Computation: A New Application of Differential Privacy

A Trustable Autonomous Systems Lifecycle

The military requires flexible unmanned cyber-physical systems that exhibit autonomous decision making while both obeying rules of engagement and operating within a verifiable behavioral safety envelope. We currently lack methods to provide assurance that such systems will operate reliably and with integrity in their operating environment as they continue to learn how to adapt to new situations. We have developed an architecture and an autonomous systems verification and validation approach based, in part, on the new discipline of software intent specifications.

Keynote Presentation: Symmetries in Software

Keynote Presentation: Formal Methods and the Defense Industrial Base

Keynote Presentation - John Launchbury

Verified Data Structures for Trusted Autonomy: A Compilation Approach

Certified Multiplicative Weights Update, or Verified Learning Without Regret

ABSTRACT

The HCSS'17 Call asks how existing techniques can be used to build assurance cases for AI- and machine-learning-based systems. In this talk, I demonstrate that for certain methods in reinforcement learning, the subarea of AI in which an agent attempts to optimize a strategy over time against an adversarial environment, conventional tools such as interactive theorem provers suffice to prove both strong functional correctness and complexity guarantees.
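
The abstract does not include the verified development itself, so what follows is only a minimal, unverified Python sketch of the textbook multiplicative weights update rule the talk refers to: weights over a fixed set of actions are renormalized into a probability distribution each round and then multiplicatively penalized in proportion to the costs the adversary reveals. The function name, learning-rate choice, and toy cost sequence below are illustrative assumptions, not taken from the talk.

    import math
    import random

    def multiplicative_weights(cost_rounds, eta):
        """Multiplicative weights update (MWU) over a sequence of cost vectors.

        cost_rounds: list of length-n cost vectors with entries in [0, 1].
        eta: learning rate.  Returns the algorithm's total expected cost.
        """
        n = len(cost_rounds[0])
        weights = [1.0] * n                      # start from uniform weights
        total_cost = 0.0
        for costs in cost_rounds:
            z = sum(weights)
            probs = [w / z for w in weights]     # play the normalized distribution
            total_cost += sum(p * c for p, c in zip(probs, costs))
            # penalize each action in proportion to the cost it just incurred
            weights = [w * (1.0 - eta * c) for w, c in zip(weights, costs)]
        return total_cost

    if __name__ == "__main__":
        # Toy adversarial environment (hypothetical): 3 actions, 1000 rounds.
        random.seed(0)
        T, n = 1000, 3
        rounds = [[random.random() for _ in range(n)] for _ in range(T)]
        eta = math.sqrt(math.log(n) / T)         # standard no-regret rate
        alg_cost = multiplicative_weights(rounds, eta)
        best_fixed = min(sum(r[i] for r in rounds) for i in range(n))
        print(f"algorithm cost {alg_cost:.1f}, best fixed action {best_fixed:.1f}, "
              f"regret {alg_cost - best_fixed:.1f}")

For costs in [0, 1] and the standard rate eta = sqrt(ln n / T), the textbook analysis bounds the algorithm's expected cost by that of the best fixed action plus a term on the order of sqrt(T ln n); the abstract does not say which exact bound is mechanized, only that conventional interactive theorem provers suffice to prove both the functional correctness and the complexity guarantees.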