Abstract
This project addresses the foundational problem of knowledge within cyber-physical systems (CPS), i.e., systems that combine communication, computation, and physics. Such a system observes its environment through sensors and acts on it through actuators. Neither is perfect. Thus, the CPS's internal view of the world is blurry and its actions are imprecise. Yet CPS are still analyzed with methods that do not distinguish between truth in the world and the system's internal view thereof, resulting in a mismatch between the behavior of theoretical models and their real-world counterparts. How, then, can such systems be trusted to perform safety-critical tasks? This project addresses this critical deficiency by developing methods to reason about knowledge and learning in CPS. It pursues new logical principles for verifying knowledge-aware CPS, and investigates how to make the mismatch between the world and its partial perception through sensors explicit, and how to achieve provably correct control in theory as well as in practice despite this mismatch. By investigating changing knowledge in a changing world, this project contributes a fundamental capability without which CPS can never be both truly safe and efficient.
The project's broader significance and importance stem from the widespread attention that CPS receive in many safety-critical areas, such as the aviation and automotive industries. One reason for safety gaps in such CPS is that formal verification techniques are still largely knowledge-agnostic, and verifiable solutions are often overly pessimistic. This project addresses these issues and provides tools for incorporating knowledge about the environment's intentions into models in order to derive provably correct, but justifiably optimistic, and thus efficient, behavior. By their logical nature, these techniques apply to a wide range of CPS and thus contribute to numerous applications. Results obtained within this project will be demonstrated in CPS models and laboratory robot scenarios, and will be shared in courses and with industrial partners.
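To make the sensing mismatch concrete, the following is a minimal sketch in the style of standard differential dynamic logic car models; it is an illustration, not a result of this project. A car at true position x with velocity v must stop before position m, but its controller only reads a measurement x̂ that may deviate from x by up to an error bound Δ (A, b, ε are assumed parameters: maximum acceleration, braking power, and control cycle time). The controller branches on the measurement, strengthened by Δ, while the safety property x ≤ m is stated about the true position:

\[
\begin{array}{l}
A \ge 0 \land b > 0 \land \varepsilon > 0 \land \Delta \ge 0 \land v \ge 0 \land x + \tfrac{v^2}{2b} \le m \rightarrow \\
\quad \big[\big(\ \hat{x} := {*};\ ?\,\hat{x} - \Delta \le x \le \hat{x} + \Delta; \\
\qquad \text{if}\ \big(\hat{x} + \Delta + \tfrac{v^2}{2b} + \big(\tfrac{A}{b}+1\big)\big(\tfrac{A}{2}\varepsilon^2 + \varepsilon v\big) \le m\big)\ \text{then}\ a := A\ \text{else}\ a := -b; \\
\qquad t := 0;\ \{x' = v,\ v' = a,\ t' = 1\ \&\ v \ge 0 \land t \le \varepsilon\}\ \big)^{*}\big]\ x \le m
\end{array}
\]

Because x ≤ x̂ + Δ holds after sensing, the Δ-strengthened acceleration guard absorbs the worst-case measurement error, so the safety proof goes through against the true position even though the controller never observes it.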
The technical approach pursued in this project develops a new modeling language, logic, and proof calculus for verifying knowledge-aware CPS. The knowledge paradigm allows CPS controllers to seamlessly acquire knowledge not only about the world but also about other agents in the system, i.e., other controllers. Knowledge is the key to interactions between different agents. This project investigates how an explicit model of world perception and agent intentions, together with knowledge of these perceptions and intentions, allows CPS agents to act on more efficient, yet still provably safe, control in multi-agent scenarios. The methods will be implemented in the verification tool KeYmaera and demonstrated by formally verifying case study applications such as car scenarios.
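As one illustration of how knowledge of another agent's intentions could justify less pessimistic control, consider a follower car f behind a leader l. The notation below is purely illustrative, assuming an epistemic operator K_f layered over the dynamic modality of differential dynamic logic; the project's actual language and calculus may differ. If the follower knows that the leader never brakes harder than b_l, a following distance computed against relative braking distances suffices, instead of against the worst case of an instantaneous stop:

\[
K_f\big(a_l \ge -b_l\big) \land x_f < x_l \land x_f + \tfrac{v_f^2}{2 b_f} \le x_l + \tfrac{v_l^2}{2 b_l}
\ \rightarrow\
\big[\ \text{ctrl}_f;\ \{x_f' = v_f,\ v_f' = a_f,\ x_l' = v_l,\ v_l' = a_l\}\ \big]\ x_f < x_l
\]

Here ctrl_f stands for a hypothetical follower controller that only accelerates while the relative braking-distance condition can be re-established; without the knowledge assumption K_f(a_l ≥ -b_l), a provably safe controller would have to keep the follower's full stopping distance v_f²/(2 b_f) clear at all times.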
André Platzer
André Platzer is a Professor of Computer Science at Carnegie Mellon University, Pittsburgh, PA, USA. He develops the logical foundations of cyber-physical systems, the subject of his NSF CAREER project. In his research, he works on logic-based verification and validation techniques for various forms of cyber-physical systems, including hybrid systems, distributed hybrid systems, and stochastic hybrid systems. He developed differential dynamic logic and differential invariants and leads the development of the CPS verification tool KeYmaera X.
André Platzer received an ACM Doctoral Dissertation Honorable Mention Award and an NSF CAREER Award, and was named one of the Brilliant 10 young scientists by Popular Science magazine in 2009 and one of AI's 10 to Watch in 2010 by IEEE Intelligent Systems magazine.
Performance Period: 01/01/2015 - 12/31/2017
Institution: Carnegie Mellon University
Sponsor: National Science Foundation
Award Number: 1446712