Innovations driven by recent progress in artificial intelligence (AI) have demonstrated human-competitive performance. However, as research expands to safety-critical applications such as autonomous vehicles and healthcare treatment, safety becomes a bottleneck in the transition from theory to practice. Safety-critical autonomy must undergo rigorous evaluation before large-scale deployment. Such systems are unique in that failures may cause serious consequences, so an extremely low failure rate is required. As a result, test outcomes under naturalistic conditions are extremely imbalanced, with failure cases being rare. This rarity, together with the complexity of modern AI architectures, poses a challenge to the design of effective evaluation methods that conventional approaches cannot adequately address.
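
To make the statistical difficulty concrete, below is a minimal Python sketch contrasting naive naturalistic testing with a variance-reduction technique (importance sampling). The one-dimensional Gaussian disturbance model, the THRESHOLD failure boundary, and both estimators are purely illustrative assumptions, not the methods proposed here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical toy model: a scalar disturbance x ~ N(0, 1); the system
# "fails" when x exceeds a high threshold, so failures are rare.
THRESHOLD = 4.0                       # assumed failure boundary
p_true = norm.sf(THRESHOLD)           # ~3.2e-5: the true (rare) failure rate

# Naive Monte Carlo: sample naturalistic conditions and count failures.
# With p ~ 3e-5 and n = 1e5, the relative standard error is
# sqrt((1 - p) / (n * p)) ~ 0.56, and many runs observe zero failures.
n = 100_000
x = rng.standard_normal(n)
p_naive = np.mean(x > THRESHOLD)

# Importance sampling: draw from a proposal N(THRESHOLD, 1) shifted toward
# the failure region, then reweight each sample by the likelihood ratio
# p(y) / q(y) to keep the estimator unbiased.
y = rng.standard_normal(n) + THRESHOLD
w = norm.pdf(y) / norm.pdf(y, loc=THRESHOLD)
p_is = np.mean((y > THRESHOLD) * w)

print(f"true p          = {p_true:.3e}")
print(f"naive MC        = {p_naive:.3e}")
print(f"importance samp = {p_is:.3e}")
```

Under this toy model, the importance-sampling estimate typically agrees with the true rate to within a few percent at a sample budget where naive naturalistic sampling often observes no failures at all, illustrating why rare-event evaluation calls for methods beyond naturalistic testing.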

This proposal aims to understand the fundamental challenges in assessing the risk of safety-critical AI autonomy and puts forward new theories and practical tools for developing certifiable, implementable, and efficient evaluation procedures. The specific aims of this research are to develop evaluation methods for three types of AI autonomy that cover a broad array of real-world applications: deep learning systems, reinforcement learning systems, and sophisticated systems comprising sub-modules, and to validate them on the sensing and decision-making systems of real-world autonomous platforms. This research lays the foundation for the PI's long-term career goal of safely deploying AI in the physical world, opens up a new cross-cutting area of rigorous and efficient evaluation methods, addresses urgent societal concerns surrounding the upcoming massive deployment of AI autonomy, and trains a diverse, globally competitive workforce through education at all levels.
 
