QComp 2019

The 2019 Comparison of Tools for the Analysis of Quantitative Formal Models (QComp 2019) is the first friendly competition among verification and analysis tools for quantitative formal models. It is part of the TACAS 2019 Toolympics. Drawing its benchmarks from the Quantitative Verification Benchmark Set, it compares the performance, versatility and usability of the participating tools.

Organisers: Arnd Hartmanns (University of Twente) and Tim Quatmann (RWTH Aachen)


Quantitative Verification Benchmark Set

The Quantitative Verification Benchmark Set is a collection of probabilistic models – Markov models, fault trees, stochastic Petri nets and others – that serves as a benchmark set for the benefit of algorithm and tool developers as well as the foundation of QComp 2019.

Every model is accompanied by a set of properties: queries for quantities of interest to be computed by analysis tools. The benchmark set collects models with queries for unbounded and bounded reachability probabilities, for unbounded and bounded variants of various forms of expected accumulated rewards, and for steady-state values. Alongside its various models, the repository also provides tool results: property values and performance data obtained by applying different tools to the models across different hardware configurations. Throughout, all data necessary for replicability is available, for example the command lines used to convert the models from their original modelling language into JANI, or the precise hardware and runtime configuration in which a set of results was obtained. The benchmark set is hosted on GitHub and accepts contributions.
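To illustrate one of the property types mentioned above, the following sketch computes an unbounded reachability probability for a tiny discrete-time Markov chain using value iteration, a standard technique behind many of the participating tools. This is a hypothetical example for intuition only, not taken from the benchmark set: the states, transition probabilities, and tolerance are made up.

```python
# Hedged illustration: value iteration for an unbounded reachability
# probability ("what is the probability of eventually reaching a target
# state?") on a small example Markov chain. All names and numbers here
# are invented for demonstration; they do not come from QComp.

def reachability_probability(transitions, targets, states, tol=1e-9):
    """Approximate Pr(eventually reach a state in `targets`) per state.

    `transitions` maps a state to a dict of successor -> probability;
    absorbing non-target states must self-loop with probability 1.
    """
    # Target states reach themselves with probability 1; start others at 0.
    prob = {s: (1.0 if s in targets else 0.0) for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in targets:
                continue
            # Bellman update: weighted sum over successor probabilities.
            new = sum(p * prob[t] for t, p in transitions.get(s, {}).items())
            delta = max(delta, abs(new - prob[s]))
            prob[s] = new
        if delta < tol:  # converged up to the chosen tolerance
            return prob

# Example chain: from "a", reach "goal" with probability 0.5,
# otherwise fall into the absorbing state "fail".
chain = {
    "a": {"goal": 0.5, "fail": 0.5},
    "fail": {"fail": 1.0},
}
result = reachability_probability(chain, {"goal"}, ["a", "fail", "goal"])
# result["a"] converges to 0.5
```

Real tools use far more sophisticated variants (e.g. interval iteration with guaranteed error bounds), but the fixed-point structure is the same.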
