PRISM

Benchmark
Model: resource-gathering v.2 (MDP)
Parameter(s): B = 1300, GOLD_TO_COLLECT = 100, GEM_TO_COLLECT = 100
Property: expgold (exp-reward-step-bounded)
Invocation (default)
./fix-syntax ./prism --javamaxmem 11g resource-gathering.pm resource-gathering.prctl --property expgold -const B=1300,GOLD_TO_COLLECT=100,GEM_TO_COLLECT=100
Default settings.
Execution
Walltime: 36.3676540851593 s
Return code: 0
Relative error: 4.756139514895092e-15
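The relative error above is presumably measured against a known reference value for this property. A minimal sketch of that computation (the helper name and the comparison values are illustrative, not taken from the benchmark infrastructure):

```python
def relative_error(value: float, reference: float) -> float:
    """Absolute difference between the computed value and the
    reference, scaled by the magnitude of the reference."""
    return abs(value - reference) / abs(reference)

# Illustrative only: comparing the reported result against itself.
v = 145.28750146751008
print(relative_error(v, v))  # identical values give 0.0
```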
Log
PRISM
=====

Version: 4.4.dev
Date: Tue Dec 11 01:15:14 CET 2018
Hostname: qcomp2019
Memory limits: cudd=1g, java(heap)=1g
Command line: prism --javamaxmem 11g resource-gathering.pm_fixed resource-gathering.prctl_fixed --property expgold -const 'B=1300,GOLD_TO_COLLECT=100,GEM_TO_COLLECT=100'

Parsing model file "resource-gathering.pm_fixed"...

Parsing properties file "resource-gathering.prctl_fixed"...

3 properties:
(1) "expgold": R{"rew_gold"}max=? [ C<=B ]
(2) "expsteps": R{"time_reward"}min=? [ F "success" ]
(3) "prgoldgem": Pmax=? [ F<=B "success" ]
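For reference, the three properties as they would appear in the properties file, reconstructed from the names and formulas printed in the log (the actual layout of resource-gathering.prctl may differ):

```
// expected gold collected within B steps (reward structure "rew_gold")
"expgold": R{"rew_gold"}max=? [ C<=B ]
// minimum expected steps until success (reward structure "time_reward")
"expsteps": R{"time_reward"}min=? [ F "success" ]
// maximum probability of success within B steps
"prgoldgem": Pmax=? [ F<=B "success" ]
```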

Type:        MDP
Modules:     robot goldcounter gemcounter 
Variables:   gold gem attacked x y required_gold required_gem 

---------------------------------------------------------------------

Model checking: "expgold": R{"rew_gold"}max=? [ C<=B ]
Model constants: GOLD_TO_COLLECT=100,GEM_TO_COLLECT=100,B=1300

Building model...
Model constants: GOLD_TO_COLLECT=100,GEM_TO_COLLECT=100,B=1300

Computing reachable states...

Reachability (BFS): 1215 iterations in 0.80 seconds (average 0.000658, setup 0.00)

Time for model construction: 0.787 seconds.

Type:        MDP
States:      958894 (1 initial)
Transitions: 3325526
Choices:     3080702

Transition matrix: 898 nodes (4 terminal), 3325526 minterms, vars: 23r/23c/4nd

Computing rewards...
Switching engine since hybrid engine does not yet support this computation...
Engine: Sparse

Building sparse matrix (transitions)... [n=958894, nc=3080702, nnz=3325526, k=4] [53.5 MB]
Building sparse matrix (transition rewards)... [n=958894, nc=3080702, nnz=61206, k=4] [16.1 MB]
Creating vector for state rewards... [7.3 MB]
Allocating iteration vectors... [2 x 7.3 MB]
TOTAL: [91.5 MB]

Starting iterations...
Iteration 190 (of 1300): 5.02 sec so far
Iteration 381 (of 1300): 10.04 sec so far
Iteration 572 (of 1300): 15.06 sec so far
Iteration 763 (of 1300): 20.07 sec so far
Iteration 954 (of 1300): 25.08 sec so far
Iteration 1145 (of 1300): 30.10 sec so far

Iterative method: 1300 iterations in 34.90 seconds (average 0.026262, setup 0.76)
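The 1300 iterations correspond to the step bound B = 1300: a step-bounded expected reward Rmax=? [ C<=B ] is computed by backward induction, one Bellman update per step of the horizon. A minimal sketch on a toy two-state MDP (illustrative only, unrelated to the actual resource-gathering model):

```python
import numpy as np

def step_bounded_max_reward(P, R, B):
    """Backward induction for Rmax=? [ C<=B ] on a small MDP:
    one value-iteration update per step of the horizon B.
    P[a] is the transition matrix and R[a] the state-reward
    vector for action a."""
    v = np.zeros(P[0].shape[0])  # value with 0 steps remaining
    for _ in range(B):
        # best action: immediate reward plus expected successor value
        v = np.max([R[a] + P[a] @ v for a in range(len(P))], axis=0)
    return v

# Illustrative 2-state MDP with two actions (not the benchmark model):
P = [np.array([[1.0, 0.0], [0.5, 0.5]]),   # action 0
     np.array([[0.0, 1.0], [0.0, 1.0]])]   # action 1
R = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
print(step_bounded_max_reward(P, R, 3))
```

PRISM performs the same kind of per-step update, but on the sparse matrix built above rather than dense NumPy arrays.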

Value in the initial state: 145.28750146751008

Time for model checking: 34.95 seconds.

Result: 145.28750146751008 (value in the initial state)


Overall running time: 36.16 seconds.