Learning Control Sharing Strategies for Assistive Cyber-physical Systems
Assistive machines, like robot arms mounted on powered wheelchairs, promote independence and
ability in those with severe motor impairments. Users of these assistive robots can teleoperate their
devices to perform activities of daily living like eating and self-care. However, as these machines
become more capable, they often also become more complex. Traditional teleoperation interfaces
cover only a portion of the control space, so the user must switch between different control modes to access the full space.
We propose a new paradigm for controlling these complex assistive Cyber-Physical Systems
(CPSs) via simple low-dimensional control interfaces that are accessible to persons with severe
motor impairments, like 2-D joysticks or 1-D Sip-N-Puff interfaces. To do so, we leverage techniques
from robot autonomy to anticipate when to switch between different control modes. This approach
is a departure from the majority of control sharing approaches within assistive domains, which either
partition the control space and allocate different portions to the robot and human, or augment the
human's control signals to bridge the dimensionality gap. How to best share control within assistive
domains remains an open question, and an appealing characteristic of our approach is that the user
is kept maximally in control since their signals are not altered or augmented.
We introduce a formalism for assistive mode switching, grounded in hybrid dynamical systems theory, that aims to ease the burden of teleoperating high-dimensional assistive robots. By modeling the CPS as a hybrid dynamical system, we cast assistance as optimization of a desired cost function, and we capture the system's uncertainty over the user's goals with a partially observable Markov decision process (POMDP). This model provides a natural scaffolding for learning user preferences.
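As a minimal sketch of this formulation (the notation here is illustrative, not the exact symbols of our formalism), the continuous robot dynamics are indexed by the active control mode, and the system maintains a belief over the user's hidden goal:
\[
\dot{x} = f_q(x, u), \quad q \in \mathcal{Q}, \qquad
b_{t+1}(g) \;\propto\; p(o_{t+1} \mid g, x_{t+1})\, b_t(g),
\]
where $x$ is the robot state, $u$ the user's low-dimensional control input, $q$ the active control mode, $g$ the user's goal, and $o$ the observations (e.g., control signals or gaze); assistance then selects mode switches that minimize the expected cost under the belief $b(g)$.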
In this poster, we report our research toward identifying a cost function for mode switching.
First, we quantify the time and energy impact of mode switching during teleoperation for typical
daily tasks. In a user study with able-bodied people, we found that about 15% of task time is spent
just on mode switching. We also report research toward learning when to switch modes based on
two types of demonstration data: direct control signals of the robot, and indirect signals in the form
of eye gaze. An SVM trained on robot motion trajectories from human demonstrations successfully distinguished between two control modes, translation and rotation. From initial eye gaze data, we developed a model that predicts user intent, which could be applied toward automated mode-switching assistance.
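A minimal sketch of such a mode classifier, assuming scikit-learn and simple hand-crafted trajectory features (the feature set, window length, and labels below are illustrative placeholders, not our exact pipeline):

```python
# Illustrative sketch: classify whether a demonstrated trajectory segment calls
# for the translation or the rotation control mode, using an SVM over simple
# hand-crafted features of the end-effector twist. Features, window length,
# and labels are placeholder assumptions, not the study's actual pipeline.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def segment_features(segment):
    """Summarize a window of end-effector twists (N x 6: [vx, vy, vz, wx, wy, wz])."""
    lin = np.linalg.norm(segment[:, :3], axis=1)   # linear speed at each step
    ang = np.linalg.norm(segment[:, 3:], axis=1)   # angular speed at each step
    return np.array([lin.mean(), lin.std(), ang.mean(), ang.std(),
                     (ang / (lin + 1e-6)).mean()])  # rotation-to-translation ratio

# Placeholder demonstration segments and labels (0 = translation, 1 = rotation).
rng = np.random.default_rng(0)
segments = [rng.normal(size=(50, 6)) for _ in range(40)]
y = np.array([0, 1] * 20)
X = np.stack([segment_features(s) for s in segments])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

In practice, the features would be computed from demonstrations recorded on the robot, and a trained classifier of this kind could suggest a mode switch whenever its prediction disagrees with the currently active mode.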
Our work has the potential for significant public health impact by increasing the independence of
people with severe motor impairments and/or paralysis. Our collaborations with top rehabilitation
hospitals (the Rehabilitation Institute of Chicago, ranked #1 rehabilitation hospital in the US)
and with the manufacturers of these assistive devices (Kinova Robotics, makers of the JACO
robotic arm) enable us to have direct impact on the population that would most benefit from this
technology.