Formal Models of Human Control and Interaction with CPS

Abstract:

Cyber-Physical Systems (CPS) encompass a wide variety of systems, including future energy systems (e.g., the smart grid), homeland security and emergency response, smart medical technologies, smart cars, and air transportation. One of the most important challenges in the design and deployment of Cyber-Physical Systems is how to formally guarantee that they are amenable to effective human control. This is a challenging problem not only because of the operational changes and increasing complexity of future CPS, but also because of the nonlinear nature of the human-CPS system under realistic assumptions. The current state of the art has generally produced simplified models and has not fully considered realistic assumptions about system and environmental constraints or about human cognitive abilities and limitations. To overcome these limitations, our overall research goal is to develop a theoretical framework for complex human-CPS that enables formal analysis and verification, ensuring stability of the overall system operation as well as avoidance of unsafe operating states. To analyze a human-CPS involving human operator(s) with bounded rationality, we identify three key questions: (a) Are the inputs available to the operator sufficient to generate desirable behaviors for the CPS? (b) If so, how easy is it for the operator, given her cognitive limitations, to drive the system toward a desired behavior? (c) How can we formally identify areas of poor system performance and determine appropriate mitigations? Our overall technical approach will be to (a) develop and appropriately leverage general cognitive models that incorporate human limitations and capabilities, (b) develop methods to abstract cognitive models into tractable analytical human models, (c) develop innovative techniques to design the abstract interface between the human and the underlying system to reflect mutual constraints, and (d) extend current state-of-the-art reachability and verification algorithms for the analysis of abstract interfaces, in which one of the systems in the feedback loop (i.e., the user) is largely unknown, uncertain, highly variable, or poorly modeled.
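To make the reachability question in item (d) concrete, the sketch below propagates an axis-aligned box of states through simple linear dynamics when the only available model of the operator is a bound on the commands she can issue, and flags the first step at which the over-approximated reachable set may touch an unsafe region. The double-integrator dynamics, input bounds, and position limit are illustrative assumptions for exposition only, not the project's actual models or algorithms.

```python
# Minimal sketch: interval (box) reachability for a linear system driven by a
# bounded-but-otherwise-unknown operator input. All dynamics, bounds, and the
# unsafe set below are illustrative assumptions.
import numpy as np

# Discrete-time double integrator: state = [position, velocity],
# input u = operator command (e.g., commanded acceleration).
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],
              [dt]])

# The operator is modeled only by a bound on the commands she can issue.
u_lo, u_hi = np.array([-1.0]), np.array([1.0])

# Initial set of states, represented as an axis-aligned box [x_lo, x_hi].
x_lo = np.array([0.0, 0.0])
x_hi = np.array([0.1, 0.1])

# Unsafe region (assumed): position beyond a hard limit.
POSITION_LIMIT = 2.0

def step_box(lo, hi, A, B, u_lo, u_hi):
    """Over-approximate the image of a state box under x' = A x + B u,
    with u ranging over its own box, using center/radius interval arithmetic."""
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0            # state center / radius
    uc, ur = (u_lo + u_hi) / 2.0, (u_hi - u_lo) / 2.0  # input center / radius
    new_c = A @ c + B @ uc
    new_r = np.abs(A) @ r + np.abs(B) @ ur
    return new_c - new_r, new_c + new_r

for k in range(50):
    x_lo, x_hi = step_box(x_lo, x_hi, A, B, u_lo, u_hi)
    if x_hi[0] >= POSITION_LIMIT:
        print(f"step {k}: reachable set may violate the position limit "
              f"(position interval [{x_lo[0]:.2f}, {x_hi[0]:.2f}])")
        break
else:
    print("position limit not reachable within the horizon")
```

In the full problem the operator is not merely a bounded disturbance but an agent with cognitive limitations, so the box over the inputs would be replaced by an abstraction of a cognitive model; the sketch only shows how an uncertain element in the feedback loop enters a reachability computation.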

Tags:
License: CC-2.5
Submitted by Katia Sycara