Despite claims in popular media, current "self-driving" and advanced driver assist systems (ADAS), based on purely data-driven machine learning algorithms, may still suffer from catastrophic failures. This combination of "theoretical statistical accuracy" and "demonstrated fragility in practice" makes current deep learning algorithms unsuitable for use within feedback loops for safety-critical, cyber-physical applications such as assisted or unsupervised self-driving cars in traffic. Regardless of these shortcomings, it is certain that automation and autonomy will play a crucial role in future mobility solutions, for both personally owned and shared-mobility vehicles; and regardless of the level of automation, at least in the foreseeable future, the driver should remain in the loop. There is currently a need to quantify the impact of the human driver within the autonomy loop, both from an individual experiential perspective and in terms of safety. In addition, the next generation of "self-driving" or "driver-assist" systems should be able to sense, learn, and anticipate the driver's habits and skills, and adapt accordingly, making driving more intuitive and safer at the same time. How best to integrate the driver's learning goals and preferences in a transparent manner to enhance the "driving experience" without sacrificing safety, however, requires further work.

The main objective of this research is to use techniques and models from reinforcement learning and formal methods to develop the next generation of ADAS, one that accommodates driver preferences and habits subject to safety constraints. The aim is to increase the performance and safety guarantees of deep neural network architectures operating within a feedback loop that includes the driver by: a) using redundant architectures that blend model-free and model-based processing pipelines; and b) adding safety guarantees both during training and during execution by leveraging recent advances in formal methods for safety-critical applications. Specifically, the technique consists of learning a state prediction model to estimate the internal reward function of the driver using a novel neural network architecture, accompanied by a federated, lifelong learning approach to identify heterogeneous driver preferences and goals. The proposed approach adds a layer of safety and robustness by coupling the neural network architecture with a differentiable Signal Temporal Logic (STL) framework to meet temporal safety constraints, and an additional safety layer using a run-time assurance (RTA) mechanism that combines reachability analysis with a monitoring approach to ensure that the system cannot be steered into unsafe conditions. Illustrative sketches of these components are given below.

The proposed framework will be validated and tested in two stages. The first stage will involve simulations and experiments on several non-trivial problems using high-fidelity driving simulation platforms such as CARLA. The second stage will conduct human-in-the-loop experiments using a driving simulator developed at Georgia Tech. The research will involve both graduate and undergraduate students. The results of this research will be disseminated to the community through journal and conference publications, invited workshops and seminar presentations, and targeted exposure (press releases, interviews) in popular media.
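The federated, lifelong learning component can be pictured as federated averaging over lightweight, per-driver preference models: each vehicle fits a small model to its own driver locally and shares only parameters, never raw driving data. The following is a minimal sketch under that assumption; the linear reward head, the supervised fit to per-driver ratings, and every name in it are hypothetical stand-ins, not the project's actual architecture.

```python
# Minimal FedAvg sketch over per-driver preference models (all names hypothetical).
import torch
import torch.nn as nn

def local_update(model: nn.Module, states: torch.Tensor,
                 ratings: torch.Tensor, steps: int = 20) -> nn.Module:
    # One client: fit the preference head to this driver's feedback, locally.
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(states).squeeze(-1), ratings)
        loss.backward()
        opt.step()
    return model

def fed_avg(models: list) -> dict:
    # Server: average parameters across drivers; raw data never leaves the car.
    keys = models[0].state_dict().keys()
    return {k: torch.stack([m.state_dict()[k] for m in models]).mean(0)
            for k in keys}

global_head = nn.Linear(8, 1)  # toy reward head over an 8-dim state encoding
clients = []
for _ in range(3):  # three drivers with heterogeneous preferences
    local = nn.Linear(8, 1)
    local.load_state_dict(global_head.state_dict())
    clients.append(local_update(local, torch.randn(64, 8), torch.randn(64)))
global_head.load_state_dict(fed_avg(clients))
```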
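The differentiable STL framework rests on the fact that STL formulas admit quantitative robustness semantics: a score that is positive when the formula holds and negative otherwise, with the min/max operators replaced by smooth approximations so the score can be back-propagated through a trajectory predictor. A minimal sketch, assuming a single safety formula "always keep at least d_min headway over the horizon" and a log-sum-exp soft minimum (tensor shapes, constants, and names are illustrative):

```python
# Smooth robustness of the STL formula  G_[0,T] (dist(t) > d_min), used as a
# differentiable training penalty (illustrative sketch, not the project's code).
import torch

def soft_min(x: torch.Tensor, temp: float = 10.0) -> torch.Tensor:
    # Log-sum-exp soft minimum; it under-approximates the true minimum, so a
    # positive score is a conservative certificate that the formula holds.
    return -torch.logsumexp(-temp * x, dim=-1) / temp

def always_robustness(dist: torch.Tensor, d_min: float) -> torch.Tensor:
    margins = dist - d_min          # per-step robustness of the predicate
    return soft_min(margins)        # "always" = (soft) minimum over time

predicted_dist = torch.rand(32, 50, requires_grad=True) * 20.0  # batch x horizon, m
rho = always_robustness(predicted_dist, d_min=5.0)
loss = torch.relu(-rho).mean()  # zero once the formula is satisfied everywhere
loss.backward()                 # gradients flow back to the trajectory model
```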
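The RTA mechanism follows the classical simplex pattern: a monitor uses conservative reachability reasoning to decide whether the action proposed by the learned (unverified) controller is still recoverable, and otherwise hands control to a verified backup controller. In the sketch below, a one-step stopping-distance bound stands in for the project's full reachability analysis; all constants and names are assumptions.

```python
# Run-time assurance switch (simplex-style); the reachability check is a
# deliberately simple stopping-distance surrogate, for illustration only.
from dataclasses import dataclass

DT = 0.1          # control period, s (assumed)
MAX_BRAKE = -6.0  # backup deceleration, m/s^2 (assumed)

@dataclass
class State:
    headway_m: float      # distance to the lead vehicle
    rel_speed_mps: float  # closing speed (positive = approaching)

def stays_recoverable(s: State, accel: float, d_min: float = 5.0) -> bool:
    # Propagate one step under the proposed acceleration, then require that
    # the backup controller can still stop before the d_min headway boundary.
    v = s.rel_speed_mps + accel * DT
    d = s.headway_m - max(v, 0.0) * DT
    stopping = max(v, 0.0) ** 2 / (2.0 * abs(MAX_BRAKE))
    return d - stopping > d_min

def rta_filter(s: State, learned_accel: float) -> float:
    # Monitor + switch: pass the learned action through only if recoverable.
    return learned_accel if stays_recoverable(s, learned_accel) else MAX_BRAKE

# An aggressive learned action is overridden near a slow lead vehicle:
print(rta_filter(State(headway_m=8.0, rel_speed_mps=6.0), learned_accel=2.0))  # -6.0
```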
