Abstract
In smart systems, digital technology and the physical world interact in real time. The integration of artificial intelligence (AI) has dramatically enhanced the capabilities of these systems, but it has also introduced a layer of complexity, presenting new challenges in ensuring their safety, reliability, and accuracy. Unfortunately, there is a significant gap in the analysis and comprehension of these advanced systems, and many state-of-the-art techniques fall short of verifying their correctness, especially compared to their effectiveness on traditional systems. This project addresses the challenges of testing Cyber-Physical Systems (CPS) with integrated AI components, using a simulation-based approach to finding counterexamples. It addresses critical scientific needs for analyzing the current adoption of AI algorithms within smart systems. The research is expected to produce an open-source falsification tool, accompanied by a dataset package, encouraging further innovation in model-based verification. Upon successful completion of the project, many of the resulting tools can potentially be adapted to other applications that include AI-driven systems. The PIs also aim to bridge existing gaps by fostering collective initiatives with academic and industrial partners. The project expands the learning experience by creating new project-based courses on software verification for smart systems and AI. It further enriches learning by mentoring undergraduate and graduate students to solve real-world software verification problems and by encouraging the involvement of more female students in research activities.

Building upon recent research and preliminary results, the project highlights significant gaps in the benchmarking and in-depth analysis of AI-enabled CPS.
It also addresses the need to improve the detection capabilities of state-of-the-art techniques specifically for these systems. In its first phase, the project investigates the current adoption of AI-enabled CPS and their verification practices through benchmarking and categorization of CPS models, as well as a large-scale empirical assessment of existing falsification methods. The empirical study aims to identify the limitations and challenges of applying existing falsification methods to AI-driven CPS models. The study also explores potential correlations between various model components and the effectiveness of falsification techniques. The insights from this phase will provide CPS practitioners with a deeper understanding of the inherent limitations of existing software verification practices. In its second phase, the project develops and evaluates an innovative approach that combines Deep Reinforcement Learning and Stochastic Optimization. The optimization process identifies the high-risk regions of the input domain most prone to violating the system properties; these regions are then integrated into the decision-making policy of the reinforcement learning algorithm, which searches for the most critical falsifying traces that violate the specified property. The goal of this approach is to reduce the number of executions required to isolate high-risk input regions, thereby increasing the likelihood of detecting the most critical violations. This endeavor will improve the safety and reliability of AI-driven systems across various real-world applications.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
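To make the two-stage idea concrete, the sketch below illustrates simulation-based falsification on a toy system. It is not the project's actual method: the plant model (`simulate`), the safety property ("speed always stays below a limit"), and the search strategy (coarse random sampling to locate a low-robustness region, then local refinement there, standing in for the stochastic-optimization and learned-policy stages) are all hypothetical assumptions for illustration. A negative robustness value indicates a falsifying input, i.e., a counterexample.

```python
import random

def simulate(throttle, disturbance, steps=50):
    """Toy discrete-time plant: speed under constant throttle plus a disturbance."""
    speed = 0.0
    trace = []
    for _ in range(steps):
        speed += 0.1 * throttle + disturbance - 0.02 * speed  # simple drag term
        trace.append(speed)
    return trace

def robustness(trace, limit=40.0):
    """Margin to violating 'speed always below limit'; negative means falsified."""
    return min(limit - s for s in trace)

def falsify(n_coarse=200, n_refine=200, seed=0):
    rng = random.Random(seed)
    # Stage 1: coarse uniform sampling over the input domain to rank regions
    # by observed robustness (a stand-in for the optimization stage).
    samples = [(rng.uniform(0, 10), rng.uniform(-1, 1)) for _ in range(n_coarse)]
    scored = sorted(samples, key=lambda x: robustness(simulate(*x)))
    # Stage 2: focused search around the lowest-robustness (high-risk) sample
    # (a stand-in for the learned decision-making policy).
    t0, d0 = scored[0]
    best = (robustness(simulate(t0, d0)), (t0, d0))
    for _ in range(n_refine):
        t = min(10, max(0, t0 + rng.gauss(0, 0.5)))
        d = min(1, max(-1, d0 + rng.gauss(0, 0.1)))
        r = robustness(simulate(t, d))
        if r < best[0]:
            best = (r, (t, d))
    return best  # (robustness, input); robustness < 0 is a counterexample

if __name__ == "__main__":
    r, (t, d) = falsify()
    print(f"robustness={r:.2f} at throttle={t:.2f}, disturbance={d:.2f}")
```

The design choice this illustrates is the one described above: spending a limited simulation budget first on locating high-risk input regions and then on concentrated search within them, rather than searching the whole input domain uniformly.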
Performance Period: 03/01/2024 - 02/28/2026
Award Number: 2347294