NNV Demo: A Neural Network Verification Tool

Submitted by Diego Manzanas on

Diego Manzanas Lopez is an electrical engineering PhD student at Vanderbilt University. He is part of the VeriVITAL research group at the Vanderbilt University Institute for Software Integrated Systems. His work as a graduate assistant consists of collaborating on several projects focused on the verification and validation of cyber-physical systems (CPS).

ABSTRACT


NNV (Neural Network Verification) is a framework for the verification of deep neural networks (DNNs) and learning-enabled cyber-physical systems (CPS), built around a collection of reachability algorithms that make use of a variety of set representations, such as the star set. NNV supports exact and over-approximate reachability algorithms used to verify the safety and robustness of feed-forward neural networks (FFNNs). These two analysis schemes are also used for learning-enabled CPS, i.e., closed-loop systems, and in particular neural network control systems with linear plant models and FFNN controllers with piecewise-linear activation functions. Additionally, NNV supports over-approximate analysis for nonlinear plant models by combining the star-set analysis used for FFNNs with the zonotope-based analysis for nonlinear plant dynamics provided by CORA. This demo paper demonstrates NNV's capabilities through a case study: the verification of a learning-enabled adaptive cruise control (ACC) system.
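
For intuition, below is a minimal sketch (in Python with NumPy, not NNV's MATLAB interface) of the over-approximate reachability idea the abstract describes: an input box is pushed through a small ReLU network with interval arithmetic, and the resulting output bounds are checked against an unsafe region. The network weights, biases, and safety threshold are hypothetical placeholders; NNV's own analysis uses richer set representations such as star sets.

# Illustrative sketch only -- not NNV's API. Over-approximate reachability
# for a tiny ReLU network via interval bound propagation.
import numpy as np

def interval_affine(lb, ub, W, b):
    # Propagate the box [lb, ub] through the affine map x -> Wx + b.
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lb + W_neg @ ub + b, W_pos @ ub + W_neg @ lb + b

def interval_relu(lb, ub):
    # ReLU is monotone, so the interval image is [max(lb, 0), max(ub, 0)].
    return np.maximum(lb, 0), np.maximum(ub, 0)

# Hypothetical 2-2-1 network with ReLU hidden layer.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.5])

# Input set: the box [-1, 1] x [0, 2].
lb, ub = np.array([-1.0, 0.0]), np.array([1.0, 2.0])

lb, ub = interval_relu(*interval_affine(lb, ub, W1, b1))
lb, ub = interval_affine(lb, ub, W2, b2)

# Assumed unsafe region: output > 10. If the over-approximation stays below
# the threshold, the network is provably safe for the whole input box.
print("output bounds:", lb, ub)
print("safe" if ub[0] <= 10.0 else "cannot conclude safety (over-approximation)")

The same check on the exact reachable set (as in NNV's exact star-set analysis) would additionally allow unsafe verdicts to be confirmed with concrete counterexample inputs, at higher computational cost.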