Quantitative Verification Benchmark Set


Information for Contributors

We welcome contributions that extend the Quantitative Verification Benchmark Set by adding new models, updating existing models, and adding new experimental results. Before you submit, please carefully read and follow the general instructions as well as those specific to your type of contribution below. If you have questions about contributing, please contact us.

How to Submit

We accept contributions via email as .zip files that contain the files being added or changed (and only those), as well as via GitHub pull requests. Submissions via email will be committed to our Git repository in your name; please mention your full name and the email address of your GitHub account in your email. For pull requests, we require all commits to be made in your full name. Every submission should contain only a single contribution, e.g. one new model or an update to one existing model.

Licensing

All data contributed to the benchmark set will be redistributed under the terms of the open-access CC-BY 4.0 license. Before submitting, make sure that you have the right to make your contribution publicly available under this license, and include a statement to this effect in your email. If you submit via pull request, send us an email (from the email address associated with your GitHub account) with such a statement before creating the pull request.

Adding New Models

We welcome contributions of new models that extend the benchmark set. Every model must be

We aim for a large and diverse collection of interesting benchmark examples. For this reason, before you decide to add a model, please consider why that model is interesting, and how it helps diversify the collection. For example:

Very briefly state this reasoning in the notes or challenge attribute of the model metadata. Once you have established that you have the right to release the model under the terms of the CC-BY 4.0 license (see above), that it satisfies the above requirements, and that it is interesting, please perform the following steps to contribute it to the benchmark set:

  1. Clone the Git repository.
  2. Choose a short name for your model. Follow the short name patterns established by the existing models in the benchmark set.
  3. In the folder corresponding to the mathematical semantics of your model (e.g. benchmarks/ma for Markov automata, etc.), create a new subfolder with the exact short name for your model.
  4. Place the original model file and its JANI translation into that subfolder. Use the model's short name for the file names of all model files (original and JANI) wherever possible. If a file contains hardcoded parameters, its name should be the short name of the model, followed by a dot (.), followed by the hardcoded parameter values separated by dashes (-) in the order in which they appear on the benchmark set's website, followed by the file extension (see the example after this list).
  5. In the same subfolder, create a file index.json containing the metadata for your model. Check the existing models for how to create the metadata file and what information to include; a minimal sketch follows this list.
  6. Add a subfolder named results and include at least one result for one parameter configuration obtained from one tool given the JANI translation as input, following the conventions described under Adding New Results below.
  7. Add a reference to your model in file benchmarks/index.json. (Keep the alphabetical ordering in that file.)
  8. Test your additions by opening benchmarks/index.html in a browser. In particular, check that all links work, and that all data in the models table is displayed correctly.
  9. Submit via email or GitHub pull request.
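
As an illustration of steps 3 to 5, consider a hypothetical discrete-time Markov chain with short name foo, originally modelled in PRISM, with a single parameter K that is hardcoded to the values 2 and 4 in the model files. Its subfolder could then look as follows (all names are purely illustrative):

    benchmarks/dtmc/foo/
        foo.2.prism     original PRISM model with K hardcoded to 2
        foo.4.prism     original PRISM model with K hardcoded to 4
        foo.2.jani      JANI translation for K = 2
        foo.4.jani      JANI translation for K = 4
        index.json      model metadata
        results/        at least one result file and tool output log

The top-level attributes of index.json could be sketched as below. Apart from notes and challenge, which are mentioned above, the attribute names are assumptions made for illustration; the existing models are the authoritative reference, and the parameter, property, and file entries required by the metadata format are omitted here:

    {
        "name": "Foo Protocol",
        "short": "foo",
        "type": "dtmc",
        "original": "PRISM",
        "version": 1,
        "date": "2018-10-03",
        "version-history": [
            { "version": 1, "date": "2018-10-03", "notes": "Initial version." }
        ],
        "author": "Jane Doe <jane.doe@example.com>",
        "submitter": "Jane Doe <jane.doe@example.com>",
        "source": "https://example.org/foo-protocol-paper",
        "description": "A brief description of what the model represents.",
        "notes": "first model in the set to exercise feature X"
    }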

Updating Existing Models

Existing models within the benchmark set may be updated to correct modelling errors, add parameters, or add properties. An update should make only the minimal changes necessary to achieve its goal. These are the recommended steps for submitting an update to an existing model:

  1. Clone the Git repository.
  2. Make changes to the files inside the model's subfolder.
  3. Increment the model's version number and add an entry to its version-history (see the sketch after this list).
  4. Test your changes by opening benchmarks/index.html in a browser. In particular, check that all links still work, and that all data in the models table is still displayed correctly.
  5. Submit via email or GitHub pull request.
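
For example, if an update fixes a modelling error in a model currently at version 1, step 3 could amount to a metadata change along the following lines. This is only a sketch; the exact attribute names are assumptions and should be checked against the existing models:

    "version": 2,
    "version-history": [
        { "version": 1, "date": "2018-10-03", "notes": "Initial version." },
        { "version": 2, "date": "2019-01-15", "notes": "Fixed an off-by-one error in the sender module." }
    ]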

Adding New Results

In addition to models, we also collect experimental results obtained by analysing the models with various tools. These results serve two purposes: they provide reference values for the properties included in the models, and they allow the performance of different tools to be showcased and compared. In general, the tools used should be publicly available so that all results can be independently replicated. Follow these steps to submit new results:

  1. Clone the Git repository.
  2. Add the .json files and tool output logs describing your results to the results subfolder of the corresponding model. A file's name should be the name of the tool, followed by a dot (.), followed by the tool variant used and a dot (if a variant was used), followed by all parameter values separated by dashes (-) in the order in which they appear on the benchmark set's website and a dot, followed by the date of the results (in YYYY-MM-DD format) and a dot, followed by the file extension. For example: Storm.exact.3-4-3.2018-10-03.json. Check existing results for how to create the .json file and what information to include; a sketch follows this list.
  3. If your experiments provide new information about the number of states for some parameter configurations, update the model's main index.json file accordingly.
  4. Add or update the technical and performance data for the CPU used for your experiments in benchmarks/cpu-data.json.
  5. Test your changes by opening benchmarks/index.html in a browser. In particular, check that all links work.
  6. Submit via email or GitHub pull request.
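
As a sketch of step 2, the result file Storm.exact.3-4-3.2018-10-03.json named above might contain data along the following lines; all attribute names shown are assumptions made for illustration, and the existing result files define the actual structure:

    {
        "tool": "Storm",
        "variant": "exact",
        "date": "2018-10-03",
        "submitter": "Jane Doe <jane.doe@example.com>",
        "parameter-values": [ 3, 4, 3 ],
        "results": [
            { "property": "reach_goal", "value": 0.125, "time": 2.7 }
        ]
    }

The accompanying tool output log uses the same naming scheme with its own file extension, e.g. Storm.exact.3-4-3.2018-10-03.log.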