Neural networks used for image classification in critical applications must be tested with sufficient realistic data to ensure their correctness.
Deep neural networks are revolutionizing the way complex systems are designed.
An exploration tool for the Neural Network Verification (NNV) tool from Verivital Labs, Vanderbilt University.
Overt provides a relational piecewise-linear over-approximation of any multi-dimensional function. The over-approximation is useful for verifying systems with nonlinear dynamics.
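Overt itself is a Julia package; as a minimal illustration of the idea (not Overt's actual algorithm), the sketch below builds piecewise-linear lower and upper bounds for a one-dimensional nonlinear function. The function names and the chord-plus-slack construction are my own; the slack uses the standard interpolation-error bound for functions with bounded second derivative.

```python
import math

def pwl_overapprox(f, lo, hi, n, curvature_bound):
    """Build piecewise-linear lower/upper bounds for f on [lo, hi].

    Splits the interval into n segments; on each, the chord between the
    endpoints deviates from f by at most curvature_bound * h**2 / 8
    (the standard interpolation-error bound when |f''| <= curvature_bound),
    so padding the chord by that slack yields sound bounds.
    """
    h = (hi - lo) / n
    slack = curvature_bound * h * h / 8
    xs = [lo + i * h for i in range(n + 1)]
    ys = [f(x) for x in xs]

    def interp(x):
        # Locate the segment containing x and interpolate along its chord.
        i = min(int((x - lo) / h), n - 1)
        t = (x - xs[i]) / h
        return ys[i] + t * (ys[i + 1] - ys[i])

    lower = lambda x: interp(x) - slack
    upper = lambda x: interp(x) + slack
    return lower, upper

# Over-approximate sin on [0, pi]; |sin''| <= 1, so curvature_bound = 1.
lower, upper = pwl_overapprox(math.sin, 0.0, math.pi, 16, 1.0)
```

A verifier can then reason about the relational abstraction `lower(x) <= f(x) <= upper(x)` instead of the nonlinear function itself.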
This tool integrates the Overt and MIPVerify tools for the purpose of verifying closed-loop systems that are controlled by neural networks. Overt is a Julia package that provides sound piecewise-linear over-approximations of nonlinear functions.
PROPEL observes that neural policy representations are amenable to gradient-based learning but are hard to verify or interpret.
Learning-enabled controllers used in cyber-physical systems (CPS) are known to be susceptible to adversarial attacks. Such attacks manifest as perturbations to the states generated by the controller's environment in response to its actions.
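Such state perturbations can be illustrated with a generic random-search sketch (my own construction, not a specific attack from the literature): within an l-infinity budget `eps`, search for the observed state that maximizes some loss on the controller's behavior. All names here are hypothetical.

```python
import random

def worst_case_state(controller_loss, state, eps, trials=200, seed=0):
    """Random-search sketch of a state-perturbation attack.

    Looks for a perturbed state within l-infinity distance eps of the
    true state that maximizes controller_loss (a stand-in for any
    measure of how badly the controller behaves on that observation).
    """
    rng = random.Random(seed)
    best = list(state)
    best_loss = controller_loss(best)
    for _ in range(trials):
        # Sample a candidate inside the l-infinity ball of radius eps.
        cand = [x + rng.uniform(-eps, eps) for x in state]
        loss = controller_loss(cand)
        if loss > best_loss:
            best, best_loss = cand, loss
    return best, best_loss
```

Gradient-based attacks are far more effective in practice; this sketch only shows the threat model: the controller acts on a perturbed observation rather than the true environment state.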
ModelPlex uses theorem proving and differential dynamic logic to generate a safety condition offline. Online, this condition serves as a monitor that checks states for model compliance and verifies the safety of control decisions.
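ModelPlex derives and proves its monitor conditions in differential dynamic logic; the sketch below only illustrates the online half of that workflow, with a hand-written stopping-distance condition standing in for a proved monitor. The functions and parameters are hypothetical.

```python
def safe_to_proceed(v, d, b=5.0, eps=1e-9):
    """Hypothetical offline-derived safety condition: a vehicle at speed v
    with maximum braking b can stop within distance d iff v**2 <= 2*b*d.
    ModelPlex would derive and verify such a condition formally; here it
    is written by hand purely for illustration.
    """
    return v * v <= 2.0 * b * d + eps

def monitored_control(state, controller):
    """Online monitor: run the controller, but fall back to a safe
    action whenever the current state violates the safety condition."""
    v, d = state
    action = controller(state)
    if not safe_to_proceed(v, d):
        action = "brake"  # safe fallback when the monitor rejects the state
    return action

# At 10 m/s with only 4 m to spare, the monitor overrides the controller.
monitored_control((10.0, 4.0), lambda s: "accelerate")  # -> "brake"
```

The key design point is that the expensive proof happens once, offline, while the online check is a cheap arithmetic test evaluated at runtime.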
Runtime Safety Evaluation in Autonomous Systems (ReSonAte) is a framework for estimating the dynamic risk of Autonomous Cyber-Physical Systems.
Revel is a partially neural reinforcement learning (RL) framework for provably safe exploration in continuous state and action spaces.