Knowledge-Aware Cyber-Physical Systems


During the development of a CPS, analysing whether the system operates safely in its target environment is of utmost importance. For many application areas of CPS research, such as the transportation industry, this calls for interconnecting research on the formal verification of CPS with research on knowledge representation and reasoning in multi-agent systems. The need for such research has become tragically clear in transportation accidents, one notorious case being the crash of Air France flight 447.

The analysis of the crash released by the French aviation authorities makes it clear that the accident was only partially caused by mechanical failure. Indeed, the only mechanical problem was a temporary malfunction of the speed sensors, and neutral control inputs would have kept the plane safe. However, the pilots, taken by surprise, interpreted the data available to them differently. In CPS terms, their knowledge states differed and were, in fact, inconsistent: one appears to have believed the plane was overspeeding, while the other believed it was stalling, a condition typically caused by insufficient airspeed. Each pilot then performed a sequence of actions that would have been rational, and would have kept the plane safe, had his own perception of the world been the true one. In conjunction, their actions negated each other, and the plane remained in a stall until impact. In CPS terms, the pilots' differing goals in the game of keeping the plane safe resulted in a failure of the system, not of any individual pilot.

So far, this project has focused on knowledge awareness. To this end, we have extended a logic capable of reasoning about CPS with knowledge-reasoning capabilities. With this new logic, we are in the middle of a multi-stage process of designing CPS that take knowledge into account, and whose agents make decisions based on their own perception of the world:

1. We identify and analyse, in isolation, each individual knowledge-related issue that might have contributed to the crash. The logic often allows us to home in on counterexamples that help us understand the failure in terms of knowledge (e.g. erroneous or insufficient knowledge) and how it interacts with the agents' control of the system.

2. We then merge the simpler models into more complex systems where knowledge and knowledge/world interactions are less clear, and perform a similar analysis as in the previous stage.

3. We can finally redesign the CPS from the ground up to include knowledge as a core concept, informed by the insufficiencies of the original system with respect to knowledge.

The tangible outcomes of designing and proving the safety of knowledge-aware CPS span a wide range of contributions, from redesigning cockpits to ensure the right information is available at the right time, to designing policies and emergency procedures that minimise or eliminate the risk of knowledge-related failures. With knowledge as a first-class citizen of CPS modeling languages, it will be possible to realistically portray agent and controller capabilities. As the AF 447 incident shows, this is critical for bridging the gap between the theoretical and practical safety of CPS.
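To make the knowledge-inconsistency failure mode concrete, the sketch below is our own minimal illustration (not the project's logic or any model from the accident report; all names are invented, and the dual-input averaging is a drastic simplification of fly-by-wire sidestick behaviour). Each agent acts rationally under its own belief about the aircraft's state, yet when the beliefs are inconsistent the combined input is neutral and the true stall state persists, so the failure is visible only at the system level:

```python
# Two agents with inconsistent knowledge states, each acting rationally
# under its own belief. Illustrative assumptions: a 1-D pitch command in
# [-1, 1] and averaging of simultaneous inputs.

def rational_action(belief: str) -> float:
    """Pitch command that would be safe *if* the belief were true.
    +1.0 = full nose-up, -1.0 = full nose-down."""
    if belief == "stalling":
        return -1.0   # stall recovery: lower the nose to regain airspeed
    if belief == "overspeeding":
        return +1.0   # overspeed recovery: raise the nose to shed speed
    return 0.0        # no anomaly believed: hold neutral

def combined_input(actions):
    # Simplified dual-input handling: simultaneous commands are averaged.
    return sum(actions) / len(actions)

true_state = "stalling"
beliefs = {"pilot_flying": "overspeeding", "pilot_monitoring": "stalling"}

# The agents' knowledge states are inconsistent: they disagree on the world.
assert len(set(beliefs.values())) > 1

actions = [rational_action(b) for b in beliefs.values()]
cmd = combined_input(actions)

# Each action alone would recover the state its agent believes in, but the
# joint command is neutral, so the actual stall is never corrected: the
# system fails even though no individual decision rule is irrational.
recovered = (true_state == "stalling" and cmd < 0) or \
            (true_state == "overspeeding" and cmd > 0)
print(cmd, recovered)  # 0.0 False
```

A knowledge-aware model makes this failure provable at design time: the safety property depends not only on the dynamics and the control actions, but on whether the agents' knowledge states are mutually consistent.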

Creative Commons 2.5
