CMU SoS Lablet Quarterly Executive Summary - April 2023

A. Fundamental Research
High-level report of results or partial results that helped move security science forward. In most cases it should point to a "hard problem". These are the most important research accomplishments of the Lablet in the previous quarter.

Jonathan Aldrich

Blockchains have been proposed to support transactions on distributed, shared state, but attackers have exploited security vulnerabilities in existing blockchain programs. We applied user-centered design to create Obsidian, a new programming language that uses typestate and linearity to provide stronger safety guarantees than current approaches for programming blockchain systems.
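The core ideas of typestate (an object's allowed operations depend on its current state) and linearity (a resource must be consumed exactly once, never duplicated or lost) can be illustrated at toy scale. The sketch below is not Obsidian code; it is an invented Python analogue that only checks these rules at runtime, whereas Obsidian enforces them statically at compile time.

```python
# Illustrative sketch only: a toy escrow whose allowed operations depend on
# its current state (typestate) and whose funds leave exactly once (linearity).
# Obsidian checks these properties at compile time; this sketch checks at runtime.

class Escrow:
    def __init__(self):
        self.state = "Open"
        self.balance = 0

    def deposit(self, amount):
        # typestate rule: deposits are only legal while the escrow is Open
        if self.state != "Open":
            raise RuntimeError("deposit requires state Open, got " + self.state)
        self.balance += amount

    def release(self):
        # typestate rule: release is only legal while Open, and it transitions
        # the object to Closed, after which no further operations are allowed
        if self.state != "Open":
            raise RuntimeError("release requires state Open, got " + self.state)
        self.state = "Closed"
        # linearity: the funds are handed out exactly once and zeroed here,
        # so they cannot be released twice or silently dropped
        paid, self.balance = self.balance, 0
        return paid
```

A misuse such as depositing after release, which Obsidian would reject at compile time, surfaces here as a runtime error.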


Lujo Bauer

Securing Safety-Critical Machine Learning Algorithms

Public Accomplishments

ML models have shown promise in classifying raw executable files (binaries) as malicious or benign with high accuracy. This has led to the increasing influence of ML-based classification methods in academic and real-world malware detection, a critical tool in cybersecurity. However, previous work has urged caution by creating variants of malicious binaries, referred to as adversarial examples, that are transformed in a functionality-preserving way to evade detection.

We investigated the effectiveness of using adversarial training methods to create malware classification models that are more robust to some state-of-the-art attacks. To train our most robust models, we significantly increased the efficiency and scale of creating adversarial examples to make adversarial training practical, a first for raw-binary malware detectors. We then analyzed the effects of varying the length of adversarial training and the versions of the attacks used for training. We found that data augmentation does not deter state-of-the-art attacks, but a generic gradient-guided method borrowed from other discrete domains does improve robustness. We also showed that in most cases, models can be made more robust to malware-domain attacks by adversarially training with lower-effort versions of the same attack. In the best case, we reduced one state-of-the-art attack's success rate from 90% to 5%. We also found that training with some attacks can increase robustness to other types of attacks.

Adversarial training for raw-binary malware classifiers.

Keane Lucas, Samruddhi Pai, Weiran Lin, Lujo Bauer, Michael K. Reiter, Mahmood Sharif.

In Proceedings of the 32nd USENIX Security Symposium, August 2023. To appear.
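The adversarial-training loop described above can be sketched at toy scale. The Python sketch below is illustrative only: a linear model over byte histograms stands in for the deep raw-binary classifiers studied in the paper, and a greedy byte-appending attack stands in for the state-of-the-art attacks; all function names and parameters are invented.

```python
# Toy sketch of adversarial training for a "malware detector" (illustrative only).
# Appending bytes to a binary preserves its functionality, which mirrors the
# functionality-preserving threat model for raw-binary classifiers.

def features(binary):
    """256-bin byte histogram, normalized by file length."""
    hist = [0.0] * 256
    for b in binary:
        hist[b] += 1.0
    n = float(len(binary))
    return [h / n for h in hist]

def score(w, binary):
    """Linear maliciousness score; > 0 means classified as malicious."""
    return sum(wi * xi for wi, xi in zip(w, features(binary)))

def append_attack(w, binary, budget=64):
    """Greedily append the byte that most lowers the maliciousness score."""
    adv = bytearray(binary)
    for _ in range(budget):
        best = min(range(256), key=lambda b: score(w, bytes(adv) + bytes([b])))
        adv.append(best)
    return bytes(adv)

def adversarially_train(data, labels, epochs=5, lr=0.5):
    """Perceptron-style updates on clean AND attacked versions of each sample."""
    w = [0.0] * 256
    for _ in range(epochs):
        for binary, y in zip(data, labels):   # y: +1 malicious, -1 benign
            variants = [binary]
            if y == +1:                       # only malware gets attacked
                variants.append(append_attack(w, binary))
            for v in variants:
                if y * score(w, v) <= 0:      # misclassified -> update weights
                    x = features(v)
                    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w
```

Training on attacked variants alongside clean samples is what pushes the model to remain correct even after the attacker spends its perturbation budget; the paper's contribution is making this loop efficient enough to run at the scale of real raw-binary detectors.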


Lorrie Cranor

Characterizing user behavior and anticipating its effects on computer security with a Security Behavior Observatory



The Security Behavior Observatory (SBO) addresses the hard problem of "Understanding and Accounting for Human Behavior" by collecting data directly from people's own home computers, thereby capturing people's computing behavior "in the wild."


David Garlan

Model-Based Explanation For Human-in-the-Loop Security


We have been extending our model-based approach to self-adaptation to respond better to attacks. We are doing this on two fronts:

  1. Given behavioral models of a system and its environment, along with a set of user-specified deviations, our robustification method produces a redesign that satisfies a desired property even when the environment exhibits those deviations. We formulate this as a multi-objective optimization problem that aims to restrict the environment from causing violations.
  2. We are defining finer-grained graceful degradation that weakens requirements for individual features rather than disabling them entirely.

Both of these approaches are compatible with our approaches to explanation, and we have been investigating how to accomplish this.
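The graceful-degradation idea in the second front can be illustrated with a small sketch. The Python below is an invented analogue, not the project's implementation: each feature carries an ordered list of progressively weaker requirement levels, and under attack the system weakens a requirement before disabling the feature outright; all feature names and levels are hypothetical.

```python
# Hypothetical sketch of fine-grained graceful degradation: weaken a feature's
# requirement level before disabling the feature entirely.

FEATURES = {
    # feature name: requirement levels, strongest first (names are invented)
    "video_stream": ["1080p", "480p", "audio_only"],
    "telemetry":    ["realtime", "batched"],
}

def degrade(levels, satisfiable):
    """Return the strongest requirement level the environment still permits.

    `satisfiable(level)` models checking the requirement against the current
    (possibly adversarial) environment. Only if no level is satisfiable does
    the function return None, i.e. the feature is disabled entirely.
    """
    for level in levels:          # strongest -> weakest
        if satisfiable(level):
            return level
    return None
```

For example, under an attack that throttles bandwidth so only low-rate levels remain feasible, `video_stream` degrades to "480p" instead of being switched off, which is exactly the behavior a coarse enable/disable scheme cannot express.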


Joshua Sunshine

Security Science Research Experience for Undergraduates


The Security Science Research Experience for Undergraduates funded four students to work with Carnegie Mellon researchers in Summer 2022:

  1. Emily Chang, University of Virginia, "picoCTF Cybersecurity & Education Research through Online Gaming." Advisors: Hanan Hibshi and Maverick Woo.
  2. Patrick May, College of Wooster, "Developer Awareness of Secure Programming Practices." Advisor: Hanan Hibshi.
  3. Lyric Sampson, Alabama A&M University, "AI Ethics in Open Source." Advisors: James Herbsleb and Laura Dabbish.
  4. Daniel Verdi do Amarante, University of Richmond, "Natural Test Case Generation Using Deep Learning." Advisors: Rohan Padhye and Vincent Hellendoorn.