The interdisciplinary eldertech team at the University of Missouri is dedicated to developing and evaluating technology that keeps older adults functioning at high levels and living independently. We are leveraging ongoing research at a unique local eldercare facility (TigerPlace) to study active sensing and fusion using vision and acoustic sensors for the continuous assessment of a resident's risk of falling, as well as the reliable detection of falls in the home environment. This work is part of a larger effort to identify and assess health problems at a very early stage so that interventions can be offered before those problems become catastrophic.
Investigate adaptive, active, anonymized vision sensing for monitoring elders in a home setting
Investigate adaptive acoustic sensing for monitoring elders in a home setting
Investigate adaptive sensor fusion and intelligent decision making using heterogeneous sensor data collected at varying time scales, including both quantitative and qualitative data, and incorporating risk factors
Evaluate the effectiveness of the monitoring system in a realistic physical environment with variable conditions
The project seeks to advance the state of the art in (1) active vision sensing for activity recognition in dynamic and unpredictable environments, (2) acoustic sensing in unstructured environments, (3) adaptive sensor fusion and decision making using heterogeneous sensor data in dynamic and unpredictable environments, and (4) automatic fall detection and fall risk assessment using non-wearable sensors. The project offers an example of a cyber-physical system in which we are studying the interplay of anomaly detection (falls) and the risk factors affecting the likelihood of the anomalous event.
In year 3, we have focused especially on the last objective by evaluating the effectiveness of the monitoring system in ten TigerPlace elderly apartments. Webcam and Kinect sensing systems have been installed and operate 24 hours a day, seven days a week in these unstructured, dynamic environments. Silhouettes are extracted from the webcam data and the Kinect depth data to form 3D models. From the many walking paths observed, the system looks for purposeful walking sequences that are good candidates for capturing gait parameters. Walking speed, stride time, and stride length are extracted from the models to represent fall risk and are tracked over time to observe changes. We now have algorithms that operate in noisy, cluttered environments with variable lighting and multiple residents and visitors. An individual resident's gait parameters are identified by looking for clusters in the feature space; visitors appear as outliers to the cluster centers and thus can be discarded. We are now using these data to investigate new methods for measuring fall risk that correlate with standard fall risk assessment instruments administered monthly to the participating residents.
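The idea of identifying the resident's gait cluster and discarding visitors as outliers can be sketched as follows. This is a minimal illustration, not the project's actual algorithm: it assumes each walking sequence is summarized as a (walking speed, stride time, stride length) tuple, uses a robust per-feature center (median) and spread (median absolute deviation) so that visitor outliers do not skew the estimate, and the threshold value is an arbitrary placeholder.

```python
from math import sqrt
from statistics import median

def split_resident_visitors(samples, threshold=2.5):
    """Separate a resident's gait samples from visitor outliers.

    samples: list of (walking_speed, stride_time, stride_length) tuples.
    A robust cluster center (per-feature median) and scale (median
    absolute deviation, MAD) are estimated from all samples; sequences
    far from the center in normalized distance are treated as visitors.
    """
    dims = len(samples[0])
    center = [median(s[d] for s in samples) for d in range(dims)]
    # MAD per feature, with a small floor to avoid division by zero
    mad = []
    for d in range(dims):
        devs = [abs(s[d] - center[d]) for s in samples]
        mad.append(max(median(devs), 1e-6))
    resident, visitors = [], []
    for s in samples:
        # Normalized Euclidean distance to the robust cluster center
        dist = sqrt(sum(((s[d] - center[d]) / mad[d]) ** 2
                        for d in range(dims)))
        (resident if dist <= threshold else visitors).append(s)
    return resident, visitors
```

In practice the threshold and the choice of robust statistics would be tuned against labeled walking sequences; the point of the sketch is only that a visitor's gait features sit far from the resident's cluster center and can be filtered before fall-risk trends are computed.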
We have also continued to work on fall detection algorithms. Lab results with the acoustic, webcam, and Kinect depth data are excellent. However, achieving such good results in the TigerPlace apartments has been challenging due to the clutter and the dynamic environments. Although results have been promising in these unstructured settings, there is still a tradeoff between the false alarm rate and the sensitivity needed to capture all of the falls. In the next year, we will continue to look at opportunities for fusion to reduce the false alarm rate while retaining good sensitivity in capturing falls in real-world elderly apartments. Stunt actors are used monthly in the apartments to perform realistic elderly falls, ensuring that some falls are present for testing.
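One simple way fusion can trade off false alarms against sensitivity is a two-path decision rule over the per-sensor confidences. The sketch below is a hypothetical illustration, not the project's fusion method: the function name, the [0, 1] confidence inputs, and both threshold values are assumptions introduced here.

```python
def fuse_fall_alerts(acoustic_conf, webcam_conf, depth_conf,
                     fire_threshold=0.8, agree_threshold=0.5):
    """Fuse per-sensor fall confidences (each in [0, 1]) into one alarm.

    Path 1: a single very confident sensor fires the alarm, preserving
            sensitivity when one modality has a clear view of the fall.
    Path 2: otherwise, at least two sensors must agree above a lower
            threshold, suppressing false alarms triggered by clutter
            or noise seen by only one modality.
    """
    confs = [acoustic_conf, webcam_conf, depth_conf]
    if max(confs) >= fire_threshold:
        return True
    return sum(c >= agree_threshold for c in confs) >= 2
```

Raising `agree_threshold` lowers the false alarm rate at the cost of sensitivity, which is exactly the tradeoff observed in the unstructured apartment settings; the staged falls performed by stunt actors provide ground truth for tuning such thresholds.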
Award ID: 0931607