Monitoring Techniques for Safety Critical Cyber-Physical Systems

Abstract:

Correct functioning of cyber-physical systems is of critical importance. This is even more so for safety-critical systems such as those in medical or automotive applications. Since verification of correctness is in general infeasible and testing cannot be exhaustive, it is critically important to monitor such systems during their operation and detect erroneous behaviors so that they can be acted on. The problem of designing monitors for safety-critical cyber-physical systems is challenging, since the correctness property to be monitored is specified on the probabilistic evolution of the system state over time, which the monitor cannot directly observe. The probabilities, or randomness, in the evolution of the system are due to uncertainties introduced by noise or other unpredictable events, such as component failures. The monitor observes the inputs and outputs of the system; using this information, it needs to decide whether the system execution is correct. The project has so far introduced two models for specifying the semantics of such cyber-physical systems. The first model is the Hidden Markov System, in which the states of the system are modeled as discrete states after quantization. For such systems the property to be monitored is specified by an automaton on infinite strings. We defined two accuracy measures for a given monitor, acceptance accuracy and rejection accuracy, which capture the percentages of false alarms and missed alarms, respectively. Using these concepts we defined two notions, called strong monitorability and monitorability, and gave exact characterizations of when a system is strongly monitorable or monitorable with respect to a property. Based on these notions we developed monitoring techniques for the case in which the system to be monitored is specified by a probabilistic hybrid automaton and the property to be monitored is given by a deterministic hybrid automaton. We also explored the time complexity of the monitor with respect to safety properties.
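For the Hidden Markov System model, the monitor's core task of estimating the hidden state from observed outputs can be sketched with the standard HMM forward algorithm. The two-state chain below (an OK state and an absorbing failed state) and all its probabilities are hypothetical, chosen only to illustrate how the posterior failure probability is maintained:

```python
import numpy as np

# Hypothetical two-state Hidden Markov System: state 0 = OK, state 1 = FAILED.
T = np.array([[0.95, 0.05],   # transition probabilities; failure is absorbing
              [0.00, 1.00]])
O = np.array([[0.9, 0.1],     # P(observation | state): rows = states,
              [0.2, 0.8]])    # the failed state mostly emits observation 1

def forward_filter(belief, obs):
    """One step of the HMM forward algorithm: predict, then correct."""
    predicted = belief @ T            # propagate belief through the dynamics
    updated = predicted * O[:, obs]   # weight by the observation likelihood
    return updated / updated.sum()    # normalize back to a probability vector

belief = np.array([1.0, 0.0])         # the system starts in the OK state
for obs in [0, 0, 1, 1, 1]:           # observed output sequence
    belief = forward_filter(belief, obs)
print(belief[1])                      # posterior probability of failure
```

After the run of failure-typical observations, the posterior mass shifts almost entirely onto the failed state; a monitor can act on this belief.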
A subclass of monitorable systems, called exponentially converging monitorable systems, was introduced, and an expression for the expected time to detect a failure was presented. It is also shown that strongly monitorable systems, as well as monitorable systems that are bounded and uniform, are exponentially converging monitorable. Experimental results show that existing algorithms do exhibit fast convergence when these conditions are satisfied. We formulated two design approaches for the monitor implementation. The threshold-based monitor runs a particle filter for state estimation and raises an alarm once the probability of failure exceeds a specified threshold. The second approach implements a cost-based monitor design, in which the actions of the monitor are weighted according to a reward function: the expected cumulative costs of raising an alarm and of continuing execution are computed, and once the expected cumulative reward for raising the alarm is larger, a failure is reported. To implement the cost-based monitor design we have used the Partially Observable Markov Decision Process (POMDP) model, which conveniently represents the monitor in terms of the states of the product automaton and the specified reward function. The classical definition of a POMDP requires finite sets of states and observations, so quantization is performed. The solution of the POMDP is represented as a policy specifying the action to be taken in a particular belief state. However, the exact solution of a POMDP is known to be computationally intractable. We have therefore used the Partially Observable Monte Carlo Planning (POMCP) algorithm to compute a suboptimal policy online. POMCP does not require explicitly defined probability tables; instead, it estimates the non-deterministic behavior using a black-box representation of the system as a generative function, and employs Monte Carlo Tree Search (MCTS) to explore the most promising future trajectories.
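The threshold-based design can be sketched with a minimal bootstrap particle filter on a hypothetical one-dimensional drifting system; the dynamics, noise levels, failure region, and threshold are illustrative assumptions, not the system studied in the project:

```python
import math
import random

N = 1000                  # number of particles
THRESHOLD = 0.9           # alarm threshold on the estimated failure probability
FAIL_REGION = 5.0         # states beyond this value count as failures

def step(x):
    """Stochastic plant dynamics: constant drift plus process noise."""
    return x + 0.5 + random.gauss(0.0, 0.2)

def likelihood(obs, x):
    """Gaussian observation model p(obs | x) with standard deviation 0.5."""
    return math.exp(-0.5 * ((obs - x) / 0.5) ** 2)

def monitor_step(particles, obs):
    """Propagate, weight by the observation, resample, estimate P(failure)."""
    particles = [step(x) for x in particles]
    weights = [likelihood(obs, x) for x in particles]
    particles = random.choices(particles, weights=weights, k=N)
    p_fail = sum(1 for x in particles if x > FAIL_REGION) / N
    return particles, p_fail

random.seed(0)
particles = [random.gauss(0.0, 0.1) for _ in range(N)]
true_x, alarm_at = 0.0, None
for t in range(1, 20):
    true_x = step(true_x)                    # the (hidden) system evolves
    obs = true_x + random.gauss(0.0, 0.5)    # the monitor sees a noisy output
    particles, p_fail = monitor_step(particles, obs)
    if p_fail > THRESHOLD:
        alarm_at = t                         # failure detected at this step
        break
```

Since the sketched plant drifts steadily toward the failure region, the estimated failure probability crosses the threshold a step or two after the true state does, illustrating the detection delay that the expected-time analysis above quantifies.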
The belief state is propagated using an unweighted particle filter. POMCP quickly computes an online policy that contains decisions only for the reachable belief states. The effectiveness of the monitor was demonstrated using both a simulated example and real-time execution on a mobile robot.
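The cost-based decision can be illustrated with a one-step (myopic) simplification: compare the expected cost of raising the alarm now against the expected cost of continuing, given the current belief in failure. The actual design compares expected cumulative rewards via the POMDP solution computed by POMCP; the cost values here are hypothetical:

```python
# Hypothetical costs for the two monitor actions (illustrative values only).
C_FALSE_ALARM = 10.0    # cost of raising an alarm while the system is OK
C_MISSED = 2.0          # per-step cost of continuing while actually failed

def expected_cost_raise(p_fail):
    """Raising the alarm: a cost is incurred only if the system is in fact OK."""
    return (1.0 - p_fail) * C_FALSE_ALARM

def expected_cost_continue(p_fail):
    """Continuing: the per-step cost is incurred if the system has failed."""
    return p_fail * C_MISSED

def decide(p_fail):
    """Pick the action with the smaller expected cost for this belief."""
    if expected_cost_raise(p_fail) < expected_cost_continue(p_fail):
        return "alarm"
    return "continue"
```

With these values the break-even belief is P(failure) = 10/12 ≈ 0.83, so the cost ratio plays the role that the fixed threshold plays in the first design, while the full POMDP formulation additionally accounts for future belief evolution.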

References
1. A. P. Sistla, M. Zefran, and Y. Feng. Monitorability of stochastic dynamical systems. In Computer Aided Verification, pages 720–736. Springer, 2011.
2. A. P. Sistla, M. Zefran, and Y. Feng. Runtime monitoring of stochastic cyber-physical systems with hybrid state. In Runtime Verification, pages 276–293. Springer, 2012.
3. A. P. Sistla, M. Zefran, Y. Feng, and Y. Ben. Timely monitoring of partially observable stochastic systems. In Proceedings of the 17th International Conference on Hybrid Systems: Computation and Control, pages 61–70. ACM, 2014.

License: CC-2.5
Submitted by Andrey Yavolovsky