Trinity implements a model-based quantitative assurance framework for autonomous systems that uses predictive coding (PC). A top-down PC model is learned to predict inputs given a context, in addition to the typical bottom-up perception models used in autonomous systems. An input’s context is derived from its spatial and temporal context, from identified invariances and symmetries, and from the model’s explanations for its decision. A quantitative measure of surprise is developed based on the mismatch between the observations and the predictions of this top-down PC model. Trinity uses this computed surprise to determine when an autonomous system has low assurance and is operating outside its training domain. We have used Trinity for out-of-distribution detection, novel-class detection, adversarial attack detection, and uncertainty-aware control.
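The mismatch-based surprise measure can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: it assumes a Gaussian observation model, so surprise reduces to a scaled squared prediction error (equivalently, the negative log-likelihood of the observation under the top-down prediction, up to a constant). The function name and the `sigma` parameter are hypothetical.

```python
import numpy as np

def surprise(observation, prediction, sigma=1.0):
    """Surprise as Gaussian negative log-likelihood (up to a constant):
    0.5 * ||observation - prediction||^2 / sigma^2.
    Larger values indicate a bigger mismatch between the bottom-up
    observation and the top-down PC model's prediction."""
    resid = np.asarray(observation) - np.asarray(prediction)
    return 0.5 * float(np.sum(resid ** 2)) / sigma ** 2

# A perfect prediction yields zero surprise; mismatch raises it.
obs = np.array([1.0, 2.0])
print(surprise(obs, np.array([1.0, 2.0])))  # 0.0
print(surprise(obs, np.array([1.0, 2.5])))  # 0.125
```

In a detection setting, surprise above a threshold calibrated on in-distribution data would flag low-assurance or out-of-distribution inputs.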
Keywords: Trust in ML, Equivariance, OOD Detection, Novelty Detection