This project addresses the foundational problem of knowledge within cyber-physical systems (CPS), i.e., systems that combine aspects such as communication, computation, and physics. Each such system observes its environment through sensors and interacts with it through actuators. Neither is perfect. Thus, the CPS's internal view of the world is blurry and its actions are imprecise. Yet CPS are still analyzed with methods that do not distinguish between truth in the world and the system's internal view thereof, resulting in a mismatch between the behavior of theoretical models and their real-world counterparts. How could such systems be trusted to perform safety-critical tasks? This project addresses this critical shortcoming by developing methods to reason about knowledge and learning in CPS. The project pursues the development of new logical principles for verifying knowledge-aware CPS. It investigates how to make the mismatch between the world and its partial perception through sensors explicit, and how to achieve provably correct control despite this mismatch, in theory as well as in practice. By investigating changing knowledge in a changing world, this project contributes a fundamental capability without which CPS can never be truly safe and efficient at the same time.

The project's broader significance and importance stem from the widespread attention that CPS receive in many safety-critical areas, such as the aviation and automotive industries. One reason for safety gaps in such CPS is that formal verification techniques are still largely knowledge-agnostic, and verifiable solutions are therefore overly pessimistic. This project addresses these issues and provides tools for incorporating knowledge about the environment's intentions into the models in order to derive provably correct, but justifiably optimistic, and thus efficient, behavior. By their logical nature, these techniques are applicable to a wide range of CPS and thus contribute significantly to numerous applications. Results obtained within this project will be demonstrated in CPS models and laboratory robot scenarios, and will be shared in courses and with industrial partners.

The technical approach pursued in this project develops a new modeling language, logic, and proof calculus for verifying knowledge-aware CPS. The knowledge paradigm allows CPS controllers to seamlessly acquire knowledge about the world as well as about the other agents in the system, i.e., other controllers. Knowledge is the key to interactions between different agents. This project investigates how an explicit model of world perception and agent intentions, together with knowledge of these perceptions and intentions, allows CPS agents to act based on more efficient, but still provably safe, control in multi-agent scenarios. The methods will be implemented in the verification tool KeYmaera and demonstrated through formal verification of case study applications such as car control scenarios.
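To illustrate the kind of statement the verification tool KeYmaera is used to prove, the following sketch shows a differential dynamic logic (dL) safety claim for a textbook-style, single-agent car controller. The concrete model and all symbols (position x, velocity v, acceleration a, braking force b, maximal acceleration A, controller cycle time ep, stopping point m) are assumptions chosen for exposition and are not taken from the project abstract; the notation is informal ASCII dL, not necessarily exact tool input syntax.

    A >= 0 & b > 0 & ep > 0 & v >= 0 & x + v^2/(2*b) <= m
      ->
    [ {
        /* control: accelerate only if, after at most ep time of
           acceleration, braking still stops the car before m;
           otherwise brake */
        ( ?x + v^2/(2*b) + (A/b + 1)*(A/2*ep^2 + ep*v) <= m; a := A
          ++ a := -b );
        t := 0;
        /* plant: continuous motion for at most ep time, never backwards */
        { x' = v, v' = a, t' = 1 & v >= 0 & t <= ep }
      }*
    ] x <= m

The formula states that, starting from a state where braking suffices to stop before m, every run of the repeated control-plant loop keeps the car at or before m. The knowledge-aware extensions studied in this project would, roughly, replace the exact test on x, v, and m with conditions on what the controller can actually know from imperfect sensors about its own state and about the intentions of other agents.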
Carnegie-Mellon University
National Science Foundation
Submitted by Andre Platzer on December 22, 2015