Abstract
Cyber-Physical Systems (CPS) encompass a large variety of systems including, for example, future energy systems (e.g., the smart grid), homeland security and emergency response, smart medical technologies, smart cars, and air transportation. One of the most important challenges in the design and deployment of Cyber-Physical Systems is how to formally guarantee that they are amenable to effective human control. This is a challenging problem not only because of the operational changes and increasing complexity of future CPS, but also because of the nonlinear nature of the human-CPS system under realistic assumptions. The current state of the art has generally produced simplified models and has not fully considered realistic assumptions about system and environmental constraints or about human cognitive abilities and limitations.

To overcome these limitations, the overall research goal is to develop a theoretical framework for complex human-CPS that enables formal analysis and verification to ensure stability of the overall system operation as well as avoidance of unsafe operating states. To analyze a human-CPS involving human operators with bounded rationality, three key questions are identified: (a) Are the inputs available to the operator sufficient to generate desirable behaviors for the CPS? (b) If so, how easy is it for the operator, with her cognitive limitations, to drive the system towards a desired behavior? (c) How can areas of poor system performance be formally identified, and appropriate mitigations determined?

The overall technical approach will be to (a) develop and appropriately leverage general cognitive models that incorporate human limitations and capabilities, (b) develop methods to abstract cognitive models to yield tractable analytical human models, (c) develop innovative techniques to design the abstract interface between the human and underlying system to reflect mutual constraints, and (d) extend current state-of-the-art reachability and verification algorithms for analysis of abstract interfaces, in which one of the systems in the feedback loop (i.e., the user) is mostly unknown, uncertain, highly variable, or poorly modeled.
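To make question (a) concrete, the following is a minimal sketch of a discrete backward-reachability computation on a toy finite-state abstraction of a human-CPS loop. It is not the project's actual algorithms (which target hybrid and continuous dynamics with human-in-the-loop models); the states, inputs, and transition relation below are hypothetical and chosen only to illustrate the fixed-point idea: the set of states from which the inputs exposed to the operator suffice to reach a desired behavior.

```python
# Minimal sketch (hypothetical abstraction, not the project's algorithms):
# discrete backward reachability over a toy finite-state human-CPS model.
from typing import Dict, FrozenSet, Set, Tuple

State = str
Input = str

# Hypothetical transition relation: transitions[(state, input)] = successor.
transitions: Dict[Tuple[State, Input], State] = {
    ("cruise", "brake"): "slow",
    ("cruise", "coast"): "cruise",
    ("slow", "brake"): "stopped",
    ("slow", "coast"): "cruise",
    ("stopped", "coast"): "stopped",
}
inputs_available: Set[Input] = {"brake", "coast"}  # inputs exposed via the interface
target: Set[State] = {"stopped"}                   # desired behavior

def backward_reachable(target_set: Set[State]) -> FrozenSet[State]:
    """Fixed-point iteration: collect states from which some available
    input drives the system into the current winning set in one step."""
    win = set(target_set)
    while True:
        new = {s for (s, u), s_next in transitions.items()
               if u in inputs_available and s_next in win}
        if new <= win:
            return frozenset(win)
        win |= new

# Prints the states from which the operator's available inputs can
# eventually drive the toy system to "stopped".
print(backward_reachable(target))
```

In this toy setting, any state outside the returned set marks a configuration where the interface's inputs are insufficient, which is one way to flag candidate "areas of poor system performance" for question (c).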
The research will provide contributions with broad significance in the following areas: (1) fundamental principles and algorithms that serve as a foundation for provably safe, robust hybrid control systems for mixed human-CPS, (2) methods for the development of analytical human models that incorporate cognitive abilities and limitations and their consequences in human control of CPS, (3) validated techniques for interface design that enable effective human situation awareness while presenting the minimum information necessary for the human to safely control the CPS, and (4) new reachability analysis techniques that are scalable and allow rapid determination of different levels of system safety. The research will help identify problems (such as automation surprises and inadequate or excessive information in the user interface) in safety-critical, high-risk, or expensive CPS before they are built, tested, and deployed. The research will provide the formal foundations for understanding and developing human-CPS and will have a broad range of applications in the domains of healthcare, energy, air traffic control, transportation systems, homeland security, and large-scale emergency response. The research will also contribute to the advancement of under-represented students in STEM fields through educational innovation and outreach. The code, benchmarks, and data will be released via the project website.
Formal descriptions of models of human cognition are, in general, incompatible with formal models of the Cyber-Physical System (CPS) that the human operator controls. It is therefore difficult to determine in a rigorous way whether a CPS controlled by a human operator will be safe or stable, and under which circumstances. The objective of this research is to develop an analytic framework for human-CPS that encompasses engineering-compatible formal models of the human operator while preserving the basic architectural features of human cognition. In this project, the team will develop methodologies for building such models, as well as techniques for formal verification of the human-CPS so that performance guarantees can be provided. They will validate the models in a variety of domains, ranging from air traffic control to large-scale emergency response to the administration of anesthesia.
Meeko Oishi
Meeko Oishi received the Ph.D. (2004) and M.S. (2000) in Mechanical Engineering from Stanford University (Ph.D. minor, Electrical Engineering), and a B.S.E. in Mechanical Engineering from Princeton University (1998). She is a Professor of Electrical and Computer Engineering at the University of New Mexico. Her research interests include human-centric control, stochastic optimal control, and autonomous systems. She previously held a faculty position at the University of British Columbia at Vancouver, and postdoctoral positions at Sandia National Laboratories and at the National Ecological Observatory Network. She was a Visiting Researcher at AFRL Space Vehicles Directorate, and a Science and Technology Policy Fellow at The National Academies. She is the recipient of the NSF CAREER Award and a member of the 2021-2023 DoD Defense Science Study Group.
Performance Period: 09/15/2013 - 08/31/2016
Institution: University of New Mexico
Sponsor: National Science Foundation
Award Number: 1329878