Quantitative Visual Sensing of Dynamic Behaviors for Home-based Progressive Rehabilitation
Abstract:
Human motion-capture and computational analysis tools have played a significant role in a variety of product-design and ergonomics settings for over a quarter-century. Moving beyond traditional kinematic (and dual static) settings, advances in biomechanics and multibody dynamics have led to computational analysis tools that provide significant insight into functional performance. Such tools now make it possible to perform numerous what-if analyses to evaluate scenarios virtually, thereby providing enormous cost and time savings. However, significant differences in capability and ease of use exist between these tools, necessitating careful evaluation. Hence, in our work, we perform a comparative analysis of motion data from two alternative human motion-capture systems (Vicon vs. Kinect) processed using state-of-the-art computational-analysis systems (AnyBody Modeling System/Visual-3D). The quantitative evaluation of a clinically relevant task (squatting) facilitates an objective assessment of functional performance, including the effects of motion-capture fidelity and the role of pre-/post-processing (calibration, latent-dynamics estimation).

Knee bracing has been used to realize a variety of functional outcomes in both sport and rehabilitation applications. Much of the literature focuses on the effects of knee misalignment, force reduction, and the superiority of custom braces over commercial over-the-counter braces. Efforts to develop exoskeletons that serve as knee-augmentation systems emphasize actuation of the joints, which adds to the bulk of the ensuing designs. Instead, we employ a semi-active augmentation approach (the addition of springs and dampers). Such an approach serves to redirect power (motions and forces) to achieve the desired functional outcomes from the knee brace.
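The semi-active idea can be illustrated with a minimal model: a passive spring-damper element acting in parallel with the knee joint. This is only a sketch; the parameter names and values (k_spring, c_damper, theta_rest) are illustrative assumptions, not values or methods from the study.

```python
import numpy as np

def brace_torque(theta, theta_dot, k_spring=20.0, c_damper=1.5, theta_rest=0.3):
    """Torque (N*m) contributed by a parallel spring-damper brace element.

    theta     : knee flexion angle (rad)
    theta_dot : knee angular velocity (rad/s)
    All parameters are illustrative assumptions.
    """
    spring = -k_spring * (theta - theta_rest)  # restores joint toward rest angle
    damper = -c_damper * theta_dot             # opposes joint angular velocity
    return spring + damper

# During squat descent (flexed past the rest angle, positive velocity) the brace
# resists motion, storing energy in the spring and dissipating some in the
# damper; on ascent the spring returns the stored energy to the joint.
tau = brace_torque(theta=1.0, theta_dot=2.0)   # -20*(0.7) - 1.5*2 = -17.0 N*m
```

In this simple model the spring redirects power across the movement cycle (absorbing on descent, assisting on ascent) while the damper shapes the velocity profile, without any actuator.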
However, the suitable selection of the brace's geometric dimensions and spring parameters to achieve desired motion and force profiles at the knee remains a challenge. We therefore introduce a two-stage kinetostatic design process to help customize the brace to match a desired kinematic/static performance.

Alternate paradigms for human-worn smart-brace exoskeletons build upon the synergy of legged and wheeled robots, exploiting these capabilities to realize significant advantages (improved stability, obstacle-surmounting capability, enhanced robustness) over both traditional wheeled and/or legged systems in a range of uneven-terrain locomotion applications. We examine the possibility of further enhancing these capabilities by: (i) proposing an "adjustable four-bar" articulated-leg-wheel exoskeleton; and (ii) using active structural control to change subsystem parameters during motion. Multiple leg-wheel design parameters affect the peak static-torque requirements as well as the dynamic-bandwidth requirements of the leg-wheel actuation.

For the cyber components, we have successfully used the Kinect sensor to capture synchronized depth and color information about dynamic human behavior. The information obtained from such behavioral tracking can be used to perform human pose estimation and to convert the visual dynamics into a parametric, composable, low-dimensional manifold representation. This manifold representation is intended to act as a link space between different levels of modeling, viz. visual sensing data at the low level and the musculoskeletal model at the middle level. We assume that the human pose and visual dynamics are embedded in a low-dimensional manifold space whose nonlinear geometric structure can be estimated and assembled from many adjacent locally Euclidean patches.
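The "locally Euclidean patches" assumption can be sketched concretely: at each sample, a local tangent space is estimated by running PCA (via SVD) on the sample's nearest neighbors. The synthetic data (a 1-D curve embedded in 10-D) and all parameter choices below are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "pose trajectory": a 1-D circle embedded in a 10-D ambient space,
# standing in for high-dimensional visual/pose measurements.
t = rng.uniform(0, 2 * np.pi, 200)
X = np.zeros((200, 10))
X[:, 0], X[:, 1] = np.cos(t), np.sin(t)
X += 0.01 * rng.standard_normal(X.shape)   # small measurement noise

def local_tangent_basis(X, i, k=10, d=1):
    """Orthonormal basis of the estimated d-dim tangent space at sample i,
    obtained from PCA on the k nearest neighbors (a locally Euclidean patch)."""
    dists = np.linalg.norm(X - X[i], axis=1)
    nbrs = np.argsort(dists)[1:k + 1]        # k nearest neighbors, excluding self
    P = X[nbrs] - X[nbrs].mean(axis=0)       # center the local patch
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    return Vt[:d]                            # top-d principal directions

B = local_tangent_basis(X, i=0)              # 1 x 10 local tangent direction
```

Stitching such overlapping local patches together is what yields the global nonlinear manifold structure; here the recovered tangent direction lies (up to noise) in the plane of the embedded circle.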
We have advanced our research by proposing a class of low-rank and sparse manifold- and subspace-modeling frameworks, including submanifold decomposition; low-rank, latent, and discriminative tensor completion; robust low-rank subspace discovery; low-rank coding; one-class classification; and low-rank transfer subspace learning. We have applied these techniques to analyzing spatio-temporal patterns of human motion, action, and activity; 3D hand-gesture recognition; expression animation from motion capture; and learning relative features for visual coding.
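The core low-rank + sparse idea can be sketched with a simple alternating-thresholding scheme in the spirit of robust PCA: singular-value thresholding recovers a low-rank component (e.g., a smooth motion pattern) while entrywise soft-thresholding isolates sparse deviations (e.g., outlier frames). The thresholds, iteration count, and toy data below are illustrative assumptions, not the papers' actual algorithms.

```python
import numpy as np

def soft(x, t):
    """Entrywise soft-thresholding (proximal operator of the L1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def svt(M, t):
    """Singular-value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(soft(s, t)) @ Vt

def lowrank_sparse(M, tau=1.0, lam=0.5, iters=50):
    """Alternating minimization of tau*||L||_* + lam*||S||_1 + 0.5*||M-L-S||_F^2."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S, tau)    # update low-rank component
        S = soft(M - L, lam)   # update sparse component
    return L, S

# Toy data: a rank-1 "motion pattern" corrupted by two large outlier entries.
rng = np.random.default_rng(1)
u, v = rng.standard_normal(30), rng.standard_normal(20)
M = np.outer(u, v)
M[3, 5] += 10.0
M[17, 2] -= 10.0
L, S = lowrank_sparse(M)
```

On this toy matrix the sparse component S concentrates on the two injected outliers while L absorbs the underlying rank-1 structure, mirroring how these decompositions separate gross deviations from regular spatio-temporal patterns.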