This approach first learns a recurrent network that represents an implicit distribution over future feature values of a dynamic process. The features to be predicted may be raw observations or derived quantities, and the prediction time horizon is an algorithm parameter. Beyond predicting a feature-value distribution at a single future time, the approach can be parameterized to predict the accumulation of a feature over a future horizon. The learned distribution is conditioned on observations up to the current time through the hidden state of the recurrent network and can be used to generate samples from the predictive distribution. The set of samples produced for a time h steps in the future is used to compute an anomaly score for the actual observation h steps in the future. The type of anomaly signal is a parameter of the approach; options include k-nearest neighbors, local outlier factor, and isolation forest.
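As a minimal sketch of the scoring step (not the authors' implementation), suppose the recurrent network has already produced a set of predictive samples for the feature h steps ahead. A k-nearest-neighbors anomaly signal can then score the realized observation by its mean distance to the closest samples; the `knn_anomaly_score` helper and the Gaussian stand-in samples below are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_anomaly_score(samples, observation, k=5):
    """Mean distance from the observation to its k nearest
    predictive samples: larger means more anomalous."""
    nn = NearestNeighbors(n_neighbors=k).fit(samples)
    dist, _ = nn.kneighbors(observation.reshape(1, -1))
    return float(dist.mean())

rng = np.random.default_rng(0)
# Stand-in for samples drawn from the learned predictive
# distribution of a scalar feature h steps in the future.
samples = rng.normal(loc=0.0, scale=1.0, size=(200, 1))

typical = np.array([0.1])   # observation consistent with the samples
outlier = np.array([6.0])   # observation far outside the sample set

score_typical = knn_anomaly_score(samples, typical)
score_outlier = knn_anomaly_score(samples, outlier)
```

The same interface accommodates the other scorers mentioned above: local outlier factor and isolation forest can likewise be fit on the predictive samples and queried with the realized observation.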


This work is supported in part by the DARPA Assured Autonomy program.


Mohamad Danesh (Oregon State University)