Presentations

Model Checking Memory Safety of Industrial Code

Cybersecurity Threat Landscape

Trustworthy AI

Recent years have seen astounding growth in the deployment of AI systems in critical domains such as autonomous vehicles, criminal justice, healthcare, hiring, housing, human resource management, law enforcement, and public safety, where decisions taken by AI agents directly impact human lives. Consequently, there is increasing concern about whether these decisions can be trusted to be correct, reliable, fair, and safe, especially under adversarial attacks.

Work-in-Progress: Attack Scenarios (title not shown in full)

This presentation is part of the Works-in-Progress session, which aims to provide authors with early feedback to adjust ongoing research. Manuscript titles are redacted until the work has been published.

Work-in-Progress: Efficacy of Phishing (title not shown in full)

This presentation is part of the Works-in-Progress session, which aims to provide authors with early feedback to adjust ongoing research. Manuscript titles are redacted until the work has been published.

Flexible Mechanisms for Remote Attestation

Remote attestation consists of generating evidence of a system’s integrity via measurements and reporting that evidence to a remote party for appraisal in a form that can be trusted. The parties that exchange information must agree on formats and protocols. We assert that there is a wide variety of interaction patterns of interest among appraisers and attesters. It is therefore important to standardize on flexible mechanisms for remote attestation.
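
The attester/appraiser exchange described above can be illustrated with a minimal sketch. This is not any standardized attestation protocol; it assumes a shared HMAC key standing in for the attester's signing key, and a fixed "golden" measurement standing in for the appraiser's reference values. All names are illustrative.

```python
# Minimal sketch of a remote attestation exchange.
# Assumptions (not from any standard): a shared HMAC key models the
# attester's signing key; a precomputed "golden" hash models the
# appraiser's reference values.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-attestation-key"  # placeholder for real key material

def measure(system_state: bytes) -> str:
    """Produce a measurement (hash) of the system state."""
    return hashlib.sha256(system_state).hexdigest()

def generate_evidence(system_state: bytes, nonce: str) -> dict:
    """Attester: bind a measurement to the appraiser's nonce and sign it."""
    claims = {"measurement": measure(system_state), "nonce": nonce}
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "tag": tag}

def appraise(evidence: dict, nonce: str, reference: str) -> bool:
    """Appraiser: check the signature, freshness, and measurement."""
    payload = json.dumps(evidence["claims"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, evidence["tag"])
            and evidence["claims"]["nonce"] == nonce
            and evidence["claims"]["measurement"] == reference)

# Usage: appraisal succeeds on fresh, matching evidence; a stale nonce fails.
good_state = b"kernel v5.15, config A"
reference = measure(good_state)
nonce = "n-42"
ev = generate_evidence(good_state, nonce)
print(appraise(ev, nonce, reference))          # True
print(appraise(ev, "stale-nonce", reference))  # False
```

The nonce binding is what makes the evidence fresh rather than replayable; in real deployments the HMAC would be replaced by an asymmetric signature rooted in hardware, which is one of the interaction patterns flexible mechanisms must accommodate.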

Inconsistencies in Specification of Intel TDX Remote Attestation

Intel Trust Domain Extensions (TDX) is Intel’s upcoming trusted execution environment offering. One of its most critical processes is the remote attestation mechanism. In this talk, we expose some discrepancies in Intel’s specifications of remote attestation that may lead to design and implementation flaws, and we explain how formal specification and verification using ProVerif could help avoid these flaws.

Consent as a Foundation for Responsible Autonomy

This paper focuses on a dynamic aspect of responsible autonomy, namely, making intelligent agents responsible at run time. That is, it considers settings where decision making by agents impinges upon the outcomes perceived by other agents. For an agent to act responsibly, it must accommodate the desires and other attitudes of its users and, through other agents, of their users. The contribution of this paper is twofold.

Verified Cryptographic Code for Everybody

Along with Amazon Web Services, Galois is publishing a new paper titled “Verified Cryptographic Code for Everybody,” and we really do mean everybody. One benefit of the way we’ve done this work is that everyone who was using the code already is now using verified code. The library we’ve verified parts of is called AWS-LibCrypto. Much of that library is code that originated from BoringSSL, and BoringSSL is composed of code originating from OpenSSL.

Work-in-Progress: Cyber Emulation Experiments (title not shown in full)

This presentation is part of the Works-in-Progress session, which aims to provide authors with early feedback to adjust ongoing research. Manuscript titles are redacted until the work has been published.