Any-horizon uniform random sampling and enumeration of constrained scenarios for simulation-based formal verification
Mancini, T. (author)
Melatti, I. (author)
Tronci, E. (author)
2021
Model-based approaches to the verification of non-terminating Cyber-Physical Systems (CPSs) usually rely on numerical simulation of the System Under Verification (SUV) model under input scenarios of possibly varying duration, chosen among those satisfying given _constraints_. Such constraints typically stem from _requirements_ (or _assumptions_) on the SUV inputs and its _operational environment_, as well as from the enforcement of _additional conditions_ aiming at, e.g., _prioritising_ the (often extremely long) verification activity by focusing on scenarios explicitly exercising _selected_ requirements, or avoiding _vacuity_ in their satisfaction. In this setting, the ability to _efficiently sample at random_ (with a known distribution, e.g., uniformly) within, or to efficiently _enumerate_ (possibly in a uniformly random order), scenarios among those satisfying all the given constraints is a key enabler for the practical viability of the verification process, e.g., via simulation-based statistical model checking. Unfortunately, in case of non-trivial combinations of constraints, iterative approaches like Markovian random walks in the space of input sequences in general _fail_ to extract scenarios according to a given distribution (e.g., uniformly), and can be _very inefficient_ at producing any scenarios at all that are both legal (with respect to SUV assumptions) and of interest (with respect to the additional constraints). For example, in our case studies, up to 91% of the scenarios generated using such iterative approaches would need to be discarded. In this article, we show how, given a set of constraints on the input scenarios succinctly defined by multiple _finite memory monitors_, a data structure (_scenario generator_) can be synthesised, from which _any-horizon scenarios_ satisfying the input constraints can be _efficiently_ extracted by (possibly uniform) random sampling or (randomised) enumeration.
Our approach enables _seamless support for virtually all simulation-based approaches to CPS verification_, ranging from simple random testing to statistical model checking and formal (i.e., exhaustive) verification, when a suitable bound on the horizon or an iterative horizon-enlargement strategy is defined, in the spirit of bounded model checking.
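The core idea (sampling fixed-horizon scenarios uniformly among those accepted by finite-memory constraint monitors) can be illustrated with a standard counting-based technique. The sketch below is not the authors' implementation: it models the product of the monitors as a single hypothetical DFA (`delta`, `accepting`), counts the accepted completions of each length by dynamic programming, and then draws each input with probability proportional to the number of accepted completions it leaves open, which yields a uniform distribution over all accepted horizon-`h` scenarios.

```python
import random

def count_suffixes(delta, accepting, states, alphabet, horizon):
    # count[k][q] = number of length-k input words leading from state q
    # to an accepting state of the (product) monitor DFA
    count = [{q: (1 if q in accepting else 0) for q in states}]
    for _ in range(horizon):
        prev = count[-1]
        count.append({q: sum(prev[delta[(q, a)]] for a in alphabet)
                      for q in states})
    return count

def sample_scenario(delta, accepting, states, alphabet, horizon, start,
                    rng=random):
    # Draw one horizon-length scenario uniformly among all accepted ones.
    count = count_suffixes(delta, accepting, states, alphabet, horizon)
    if count[horizon][start] == 0:
        raise ValueError("no accepted scenario at this horizon")
    word, q = [], start
    for k in range(horizon, 0, -1):
        # choose the next input proportionally to its accepted completions
        weights = [count[k - 1][delta[(q, a)]] for a in alphabet]
        a = rng.choices(alphabet, weights=weights)[0]
        word.append(a)
        q = delta[(q, a)]
    return word

# Toy monitor (an assumption for illustration): inputs {'a','b'},
# constraint "never two consecutive 'b' inputs"; state 2 is a dead sink.
states = [0, 1, 2]
alphabet = ['a', 'b']
delta = {(0, 'a'): 0, (0, 'b'): 1,
         (1, 'a'): 0, (1, 'b'): 2,
         (2, 'a'): 2, (2, 'b'): 2}
accepting = {0, 1}

print(sample_scenario(delta, accepting, states, alphabet, 3, 0))
```

For this toy monitor there are exactly 5 accepted length-3 scenarios, and each is drawn with probability 1/5; a Markovian random walk choosing inputs uniformly at each step would instead bias towards scenarios with fewer 'b'-successors and occasionally wander into the dead state, which is precisely the failure mode the abstract describes.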
To appear
text
http://mclab.di.uniroma1.it/publications/papers/mancini/2021/191_Mancini_etal2021.pdf
10.1109/TSE.2021.3109842
Mancini_etal2021
IEEE Transactions on Software Engineering
2021
continuing
periodical
academic journal
1
1939-3520