Adaptive Intelligence for Cyber-Physical Automotive Active Safety: System Design and Evaluation


Objective: The objective of this project is to improve the performance and extend the capabilities of automotive active safety control systems by taking into account the interactions between the driver, the vehicle, the active safety system, and the environment. The current approach to the design of automotive active safety systems follows a "one size fits all" philosophy: active safety systems are the same for all vehicles and do not take into account the skills, habits, and state of the human driver who operates the vehicle.

Research Approach: To provide customization and personalization of automotive active safety systems, we will utilize driver models that can predict the driver's state and driving skills from recorded data. To achieve this objective, we plan to leverage recent advances in probabilistic graphical models and machine learning algorithms to train these models from data. Specifically, we will develop algorithms that estimate the driver's skills and current state of attention from eye-movement data, together with dynamic motion cues obtained from steering and pedal inputs. We will then inject this information into the operation of the active safety system to enhance its performance, via the application of recent results from the theory of adaptive and real-time model-predictive optimal control. Finally, selecting the correct level of autonomy and workload distribution between the driver and the active safety system will ensure that no conflicts arise between the driver and the control system, and that safety and passenger comfort are not compromised. A minimal sketch of the driver-state estimation step is given below.
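The following sketch illustrates one way the estimation step described above could be set up as a probabilistic graphical model: a discrete hidden Markov model whose forward filter tracks the driver's attention state from quantized eye-movement features. The state names, observation bins, and all probabilities are illustrative assumptions for this sketch, not values or models from the project.

```python
# Minimal sketch, assuming a two-state HMM over driver attention and a
# quantized gaze-dispersion observation; all numbers are illustrative.
import numpy as np

STATES = ["attentive", "distracted"]          # hidden driver states (assumed)
OBS_BINS = ["low", "medium", "high"]          # quantized gaze-dispersion level

# Assumed transition model P(state_t | state_{t-1}).
A = np.array([[0.95, 0.05],
              [0.10, 0.90]])

# Assumed emission model P(observation | state): a distracted driver tends to
# show larger gaze dispersion away from the road center.
B = np.array([[0.70, 0.25, 0.05],    # attentive
              [0.10, 0.30, 0.60]])   # distracted

def filter_attention(observations, prior=(0.9, 0.1)):
    """Run the HMM forward filter; return P(state_t | obs_1..t) for each step."""
    belief = np.asarray(prior, dtype=float)
    history = []
    for obs in observations:
        belief = A.T @ belief              # predict through the transition model
        belief = belief * B[:, obs]        # update with the emission likelihood
        belief = belief / belief.sum()     # normalize to a probability vector
        history.append(belief.copy())
    return np.array(history)

if __name__ == "__main__":
    # Example: gaze dispersion stays low, then jumps to high for several steps.
    obs_sequence = [0, 0, 0, 2, 2, 2, 1, 0]
    for t, p in enumerate(filter_attention(obs_sequence)):
        print(f"t={t}  P(attentive)={p[0]:.2f}  P(distracted)={p[1]:.2f}")
```

In the full system, steering and pedal features would enter as additional observation channels, and the filtered attention estimate would be passed to the model-predictive controller described above.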

This year we developed a new saliency model that fuses many state-of-the-art saliency models at the score level in a para-boosting learning fashion. First, the saliency maps generated by several models are used as confidence scores. These scores are then fed into our para-boosting learner (i.e., a support vector machine, adaptive boosting, or a probability density estimator) to generate the final saliency map. Experimental results show that score-level fusion outperforms each individual model and further reduces the performance gap between current models and the human inter-observer model. A minimal illustration of the score-level fusion step follows.
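The sketch below shows the general shape of score-level fusion: per-pixel scores from several baseline saliency maps form a feature vector, and a learner (here a scikit-learn SVM, standing in for the para-boosting learner) is trained against human fixation labels to produce the fused map. The array shapes, the single-image training setup, and the synthetic data are assumptions made for illustration only.

```python
# Minimal sketch of score-level saliency fusion with an SVM; the data and
# training protocol are illustrative, not the project's actual pipeline.
import numpy as np
from sklearn.svm import SVC

def fuse_saliency_maps(saliency_maps, fixation_map, query_maps):
    """saliency_maps: list of (H, W) score maps from individual models (training image).
    fixation_map:  (H, W) binary map of human fixations for the same image.
    query_maps:    list of (H, W) score maps for a new image to be fused."""
    X_train = np.stack([m.ravel() for m in saliency_maps], axis=1)  # (H*W, n_models)
    y_train = fixation_map.ravel().astype(int)                      # fixated vs. not
    learner = SVC(kernel="rbf", probability=True)
    learner.fit(X_train, y_train)

    X_query = np.stack([m.ravel() for m in query_maps], axis=1)
    fused = learner.predict_proba(X_query)[:, 1]                    # P(fixation)
    return fused.reshape(query_maps[0].shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H, W, n_models = 16, 16, 3
    train_maps = [rng.random((H, W)) for _ in range(n_models)]
    fixations = (sum(train_maps) / n_models > 0.5).astype(int)      # toy labels
    test_maps = [rng.random((H, W)) for _ in range(n_models)]
    print(fuse_saliency_maps(train_maps, fixations, test_maps).shape)  # (16, 16)
```

Swapping the SVC for an AdaBoost classifier or a density estimator changes only the learner line; the score-level feature construction stays the same.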

In terms of control design, this year we proposed a new hybrid driver model that combines the reactive, continuous control layer of a human driver with an anticipative, periodically updated layer that captures driver intent. We modified the well-known sensorimotor two-point visual control model by adding a model predictive control module to better predict the deliberative driving actions of a human driver. We evaluated the performance of this new model via numerical simulations and showed that the model reacts to variations of the direction angle similarly to human drivers. A sketch of the reactive layer is given below.
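For orientation, the following sketch implements only the reactive layer of a two-point visual control steering model: the steering rate is driven by the near-point and far-point visual angles. The gains, time step, and input signals are assumptions chosen for illustration, and the anticipative MPC layer described above, which would periodically update the driver's intended trajectory, is deliberately not reproduced here.

```python
# Minimal sketch of a two-point visual control steering law; gains and
# inputs are illustrative assumptions.
import numpy as np

K_FAR, K_NEAR, K_INT = 16.0, 4.0, 2.0   # assumed controller gains
DT = 0.01                               # integration step [s]

def two_point_steering(theta_near, theta_far, duration=2.0):
    """Integrate the steering response to given visual-angle signals.
    theta_near, theta_far: callables returning the visual angles [rad] at time t."""
    steps = int(duration / DT)
    delta = 0.0                          # steering wheel angle [rad]
    prev_near, prev_far = theta_near(0.0), theta_far(0.0)
    trace = []
    for k in range(1, steps + 1):
        t = k * DT
        near, far = theta_near(t), theta_far(t)
        # Steering-rate law: rates of both visual angles plus a near-point
        # compensatory term that keeps the vehicle centered in the lane.
        delta_dot = (K_FAR * (far - prev_far) / DT
                     + K_NEAR * (near - prev_near) / DT
                     + K_INT * near)
        delta += delta_dot * DT
        prev_near, prev_far = near, far
        trace.append((t, delta))
    return np.array(trace)

if __name__ == "__main__":
    # Example: a step change in road direction seen first at the far point.
    far = lambda t: 0.05 if t > 0.5 else 0.0
    near = lambda t: 0.05 if t > 1.0 else 0.0
    print(two_point_steering(near, far)[-1])   # final (time, steering angle)
```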

Experimental Validation: During this year, we have made great strides in developing the world's first fully instrumented vehicle + driver recording system. This system will allow us to simultaneously record vehicle dynamics and driver behavior, moving us significantly beyond our already popular combined eye+video+action recording system, which we have previously used with driving video games. The new system will allow us to record these behaviors non-intrusively and safely while driving real vehicles. To validate our theory, a flexible integrated driving simulator is also under construction at Georgia Tech; it uses a realistic car physics engine and, through a combination of CarSim and Simulink, creates a realistic driving experience. The vehicle data are sent via ROS messages and rendered in Unity 3D for an interactive simulation display with negligible latency, as sketched below. Using a physical simulation platform equipped with a car seat, steering wheel, pedals, and gearshift, the simulator allows a user to drive a test track with on-ramps, off-ramps, and large curves while their performance is recorded for further evaluation.
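The sketch below illustrates the data path from the physics engine to the Unity 3D display: a ROS node publishes the vehicle pose so that the rendering front end can subscribe to it. The node name, topic name, fixed 100 Hz rate, and placeholder kinematics are assumptions made for this sketch; the actual CarSim/Simulink interface is not reproduced.

```python
# Minimal sketch of the simulator's ROS bridge; names, rate, and the
# placeholder motion are illustrative assumptions.
import rospy
from geometry_msgs.msg import PoseStamped

def publish_vehicle_pose():
    rospy.init_node("car_sim_bridge")                       # assumed node name
    pub = rospy.Publisher("/vehicle/pose", PoseStamped,     # assumed topic
                          queue_size=10)
    rate = rospy.Rate(100)                                  # assumed 100 Hz loop
    t = 0.0
    while not rospy.is_shutdown():
        msg = PoseStamped()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "map"
        # Placeholder kinematics: in the real system these fields would come
        # from the CarSim/Simulink vehicle model.
        msg.pose.position.x = t
        msg.pose.position.y = 0.0
        msg.pose.orientation.w = 1.0
        pub.publish(msg)
        t += 0.01
        rate.sleep()

if __name__ == "__main__":
    try:
        publish_vehicle_pose()
    except rospy.ROSInterruptException:
        pass
```

On the Unity side, a subscriber (e.g., via a ROS-Unity bridge) would consume these pose messages and update the rendered vehicle each frame.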

 

License: CC-2.5
Submitted by Panagiotis Tsiotras on