Model-Based Explanation for Automated Decision Making


As systems become increasingly autonomous, a crucial capability is to understand not only what the autonomous behavior is, but also why it is being recommended or acted on. To address this problem we are developing mechanisms, theories, and tools for "explanation" of autonomous behavior that is based on planning using models such as Markov decision processes. In this talk we will report on recent results in three areas: (a) providing "contrastive" explanations that improve users' understanding of why actions are being taken, as well as users' ability to detect erroneous behavior; (b) interactive interfaces that allow a user to pose "what if" questions; and (c) ways to understand the consequences of selecting alternative "reward" functions on which automated planning is based.


David Garlan is a Professor of Computer Science and Associate Dean in the School of Computer Science at Carnegie Mellon University. His research interests include software architecture, self-adaptive and autonomous systems, formal methods, and cyber-physical systems. He is recognized as one of the founders of the field of software architecture, and in particular, formal representation and analysis of architectural designs. He has received a Stevens Award Citation for "fundamental contributions to the development and understanding of software architecture as a discipline in software engineering," an Outstanding Research Award from ACM SIGSOFT for "significant and lasting software engineering research contributions through the development and promotion of software architecture," an Allen Newell Award for Research Excellence, an IEEE TCSE Distinguished Education Award, and a Nancy Mead Award for Excellence in Software Engineering Education. He is a Fellow of the IEEE and ACM.

Creative Commons 2.5
