Learning Monitorable Operational Design Domains for Assured Autonomy


Bio

Hazem Torfah is a postdoctoral researcher in the EECS Department at UC Berkeley. He received his doctoral degree in Computer Science in December 2019 from Saarland University, Germany. His research interests are the formal specification, verification, and synthesis of cyber-physical systems. In his Ph.D., Hazem developed a quantitative theory for reactive systems based on model counting. He is one of the main developers of the RTLola monitoring framework, which has been integrated into the ARTIS fleet of unmanned aerial vehicles in close collaboration with the German Aerospace Center (DLR). Hazem's current focus is the development of quantitative methods for the explainability and runtime assurance of AI-based autonomous systems.

Abstract

AI-based autonomous systems increasingly rely on machine learning (ML) components to perform a variety of complex tasks in perception, prediction, and control. The use of ML components is projected to grow, and with it grows the concern about deploying these components in systems that operate in safety-critical settings. To guarantee the safety of ML-based autonomous systems, it is important to capture their operational design domain (ODD), i.e., the conditions under which using the ML components does not endanger the safety of the system. Building safe and reliable AI-based autonomous systems therefore calls for automated techniques that systematically capture the ODDs of such systems. We present a framework for learning runtime monitors that capture the ODDs of black-box systems. A runtime monitor of an ODD predicts, based on a sequence of monitorable observations, whether the system is about to exit the ODD. We particularly investigate the learning of optimal monitors based on counterexample-guided refinement and conformance testing. We evaluate our approach on a case study from the domain of autonomous driving.
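
To illustrate the idea of an ODD runtime monitor described above, the following is a minimal sketch, not the framework presented in the talk: a monitor that, from a sliding window of monitorable observations, predicts whether the system is about to exit its ODD. The feature names (illumination, fog density), thresholds, and window length are hypothetical assumptions for a driving perception component; in the actual approach such a monitor would be learned, e.g., through counterexample-guided refinement and conformance testing rather than fixed by hand.

```python
# Minimal sketch of an ODD runtime monitor (hypothetical features/thresholds).
from collections import deque
from dataclasses import dataclass


@dataclass
class Observation:
    # Hypothetical monitorable quantities for a driving perception component.
    illumination: float  # ambient light level, 0.0 (dark) .. 1.0 (bright)
    fog_density: float   # 0.0 (clear) .. 1.0 (dense fog)


class ODDMonitor:
    """Predicts, from the last `window` observations, whether the system
    is about to leave the ODD (True = predicted ODD exit)."""

    def __init__(self, window: int = 5,
                 min_illumination: float = 0.2,
                 max_fog: float = 0.6):
        # These thresholds stand in for a monitor learned offline; they are
        # illustrative assumptions, not values from the paper.
        self.history: deque[Observation] = deque(maxlen=window)
        self.min_illumination = min_illumination
        self.max_fog = max_fog

    def update(self, obs: Observation) -> bool:
        self.history.append(obs)
        # Flag a predicted ODD exit if the recent observations drift toward
        # conditions outside the assumed safe operating envelope.
        avg_light = sum(o.illumination for o in self.history) / len(self.history)
        avg_fog = sum(o.fog_density for o in self.history) / len(self.history)
        return avg_light < self.min_illumination or avg_fog > self.max_fog


if __name__ == "__main__":
    monitor = ODDMonitor()
    trace = [Observation(0.8, 0.1), Observation(0.6, 0.3),
             Observation(0.4, 0.5), Observation(0.3, 0.7)]
    for t, obs in enumerate(trace):
        if monitor.update(obs):
            print(f"t={t}: predicted ODD exit -- hand control to a fallback")
```

In a deployed system, a positive prediction from such a monitor would typically trigger a fallback, such as handing control to a simpler certified controller or to a human operator, before the ML component is used outside its ODD.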

License: CC-3.0