Model-Based Explanation For Human-in-the-Loop Security - April 2023

PI(s), Co-PI(s), Researchers: David Garlan, Bradley Schmerl (CMU)

Human Behavior
Resilient Architectures

We are addressing human behavior by providing understandable explanations for automated mitigation plans generated by self-protecting systems that use various models of the software, network, and attack. We are addressing resilience by providing defense plans that are generated automatically as the system runs, accounting for the current context, system state, observable properties of the attacker, and potential observable operations of the defense.


Changjian Zhang, Tarang Saluja, Romulo Meira-Goes, Matthew Bolton, David Garlan and Eunsuk Kang. Robustification of Behavioral Designs against Environmental Deviations. In Proceedings of the 45th International Conference on Software Engineering, 14-20 May 2023. To appear.

Simon Chu, Emma Shedden, Changjian Zhang, Romulo Meira-Goes, Gabriel A. Moreno, David Garlan and Eunsuk Kang. Runtime Resolution of Feature Interactions through Adaptive Requirement Weakening. In Proceedings of the 18th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, 15-16 May 2023. To appear.


We have been extending our model-based approach to self-adaptation so that it can respond to attacks more effectively. We are doing this on two fronts:

1. Given behavioral models of a system and its environment, along with a set of user-specified deviations, our robustification method produces a redesign that is capable of satisfying a desired property even when the environment exhibits those deviations. We formulate this as a multi-objective optimization problem that aims to restrict the environment from causing violations.

2. Defining more fine-grained graceful degradation that can weaken requirements for individual features at runtime rather than disabling them entirely.
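To give a flavor of the requirement-weakening idea, the sketch below shows a feature whose requirement is relaxed in small steps when it cannot be met, falling back to disabling the feature only when a maximum tolerance is exceeded. This is a minimal illustration of the general concept, not the algorithm from the paper; all names, thresholds, and the step size are hypothetical.

```python
# Hypothetical sketch of adaptive requirement weakening: instead of
# disabling a feature whose requirement conflicts with the current
# environment, progressively relax its tolerance until it can be met.
# All names and numeric values are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Requirement:
    name: str
    tolerance: float       # currently allowed deviation from the ideal
    max_tolerance: float   # beyond this, the feature must be disabled

    def satisfied(self, deviation: float) -> bool:
        return deviation <= self.tolerance


def weaken_until_satisfied(req: Requirement, deviation: float,
                           step: float = 0.5) -> bool:
    """Weaken the requirement in small steps; return False if the
    requirement cannot be weakened enough and the feature must be
    disabled entirely."""
    while not req.satisfied(deviation):
        if req.tolerance + step > req.max_tolerance:
            return False  # cannot weaken further: disable the feature
        req.tolerance += step
    return True


# Example: a climate-control feature nominally holds temperature within
# 1.0 degree, but an interacting feature forces a 2.3-degree deviation.
req = Requirement("hold_temperature", tolerance=1.0, max_tolerance=3.0)
kept_running = weaken_until_satisfied(req, deviation=2.3)
print(kept_running, req.tolerance)  # True 2.5 - weakened, not disabled
```

The point of the design is the middle ground it creates: rather than the binary choice of satisfying the original requirement or shutting the feature off, the system searches the space of weaker-but-acceptable requirements first.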

Both of these techniques are compatible with our approach to explanation, and we have been investigating how to combine them.


David Garlan served on a panel at the International Conference on Software Architecture entitled "Educating the Next Generation of Software Architects." One of the points he emphasized was the need to incorporate explainability into software, especially for systems that include some form of AI or automation.

We are continuing engagement with Lockheed Martin on Explanations for Generative Manufacturing.