Mutually Stabilized Correction in Physical Demonstration (Poster)


How much should a person be allowed to interact with a controlled machine? If that machine
is easily destabilized, and if the controller operating it is essential to keeping it stable, the answer may be that
the person should not be allowed any control authority at all. Drawing on techniques from
machine learning, optimal control, and formal verification, the proposed work develops a computable
notion of trust that allows the embedded system to assess the safety of an operator's instructions.

For most real-world cyber-physical systems, intuitive procedures for designing controllers
are needed. One such approach is to teach a system its control policy via demonstration; however, the
control behaviors produced by this data-driven technique cannot be verified for feasibility or stability.
In contrast, sophisticated stability analysis is possible for control behaviors derived via optimal control,
including measures of both performance and robustness, but such formulations are rarely intuitive and often
require substantial mathematical and software training to implement.
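To make the contrast concrete, consider a minimal sketch of the kind of guarantee optimal control affords. For a scalar discrete-time linear system, an LQR gain can be computed and the closed-loop dynamics checked for provable stability; the system, cost weights, and iteration scheme below are illustrative choices for this sketch, not taken from the proposal.

```python
# Sketch: stability verification for an LQR-controlled scalar system
# x[t+1] = a*x[t] + b*u[t], with quadratic cost weights q (state) and r (input).
# All numbers here are illustrative, not from the proposed work.

def lqr_gain_scalar(a, b, q, r, iters=200):
    """Solve the scalar discrete-time Riccati equation by fixed-point iteration,
    then return the optimal feedback gain k (so that u = -k*x)."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * (a - b * k)
    return k

a, b = 1.2, 1.0                    # open-loop unstable: |a| > 1
k = lqr_gain_scalar(a, b, q=1.0, r=1.0)
closed_loop = a - b * k            # closed-loop dynamics x[t+1] = (a - b*k) x[t]
stable = abs(closed_loop) < 1.0    # provable stability condition for scalar systems
```

No comparable certificate exists for a policy learned purely from demonstration data, which is precisely the gap the proposed work addresses.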
The proposed work creates a synergy between intuitive design interfaces for a physical system and the
formal verification that control provides. In particular, the proposed approach will derive control behaviors
using optimal control while simultaneously engaging a human operator to provide physical guidance for
adaptation via corrective demonstration. A fundamental technical challenge lies in the fact that the operator
may well destabilize a system that has to operate in the physical world, subject to dynamics and sources
of uncertainty; moreover, the risk to the system changes from one operator to another. The developed
controllers will be verified for stability and robustness, and a formal measure of trust in the operator will
be used to decide whether to cede control to the operator during physical correction. Hence, how aggressively
an operator may run a system will explicitly depend on the system’s assessment of that operator’s past
performance.
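A hypothetical sketch of how such a trust measure might gate control authority: the update rule, function names, and blending scheme below are assumptions made for illustration, not the proposal's actual method.

```python
# Hypothetical sketch (not the proposal's method): trust is an exponential
# moving average of past demonstration quality in [0, 1], and control
# authority is ceded to the operator in proportion to current trust.

def update_trust(trust, performance, rate=0.2):
    """Update trust from one observed correction; performance in [0, 1]."""
    return (1 - rate) * trust + rate * performance

def blend_control(u_human, u_auto, trust):
    """Blend operator and autonomous inputs; high trust cedes authority."""
    return trust * u_human + (1 - trust) * u_auto

trust = 0.5
for perf in [1.0, 1.0, 0.0]:       # two good corrections, then a destabilizing one
    trust = update_trust(trust, perf)
u = blend_control(u_human=2.0, u_auto=0.0, trust=trust)
```

Under this sketch, a destabilizing correction immediately reduces the operator's authority on the next interaction, capturing the idea that how aggressively an operator may run the system depends on the system's assessment of past performance.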

The proposed work will significantly impact cyber-physical systems for which (i) control authority is
shared between the human and machine, (ii) the machine automation is adaptable by and able to receive instruction
from a human who is not an automation expert, and (iii) there are physical, possibly destabilizing,
interactions between the human and machine. The goals of the proposed system align strongly with goals
central to the cyber-physical systems community, including:
- Interaction and potential interference among CPS and humans, by explicitly reasoning about when to cede control authority to a human operator and when to request instruction for stability assistance.
- Cross-disciplinary collaborative research, by building a synergy between the areas of data-driven machine learning and formal control theory.
- Jointly modeling the interaction of both cyber and physical components, by taking steps to quantify the level of understanding the human needs in order to provide effective corrections, and by explicitly computing the system's understanding of the consequences of physical interaction during instruction.
- Incorporating CPS science into education, by adding CPS-centric coverage to the Control of Mobile Robotics MOOC taught by co-PI Egerstedt.

The work will be assessed by demonstrating a person training rigid-body systems without
any direct interaction with a computer, relying only on cues from the system about stability and trust. The proposal
has only just been funded and has no experimental results to report as yet. The proposed work draws on
a combination of numerical methods, systems theory, machine learning, and human-machine interfaces, and
makes the significant contribution of building a bridge between two previously unconnected areas that are
both of crucial importance to cyber-physical systems: human instruction and provably stable control.

License: CC-2.5
Submitted by Todd Murphey on