IJ does not contain a reward model #107

Open
sjunges opened this issue Sep 20, 2021 · 3 comments

Comments

sjunges commented Sep 20, 2021

Dear all,

The IJ model on the PRISM website contains a reward structure "steps" [1], but the QComp model does not. Was this omission intentional?

[1] https://www.prismmodelchecker.org/casestudies/self-stabilisation.php#ij

Best,
Sebastian

MKlauck (Contributor) commented Sep 20, 2021

Hi Sebastian,

I do not really remember why we removed the reward structure and the other formulas from the model when we added it to the benchmark set. I can only imagine that we first concentrated on adding one property per model, which in this case was the simplest property, for which neither the reward structure nor the other formulas of the PRISM model are needed. But it should be no problem to add more properties (and, with that, the reward structures and the other model features).

Perhaps @ahartmanns remembers more details.

Best,
Michaela

tquatmann (Contributor) commented

If memory serves me right, we had a similar issue for philosophers-mdp, pnueli-zuck, and rabin, which all have just a single property that, moreover, only exercises almost-sure reachability analysis.

I also see no reason not to include the remaining properties from the PRISM website.

ahartmanns (Owner) commented

Agreed; I see no reason not to accept a pull request that adds properties which exist in the PRISM set but are missing from our benchmark set. We should strive for a "natural" JANI encoding: in the IJ case, for example, I would expect bounded properties to use a step bound in the property rather than a reward bound, and expected-reward properties to use a simple "1" reward with step accumulation.
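For illustration, such an encoding could look roughly like the sketch below. The property names, the goal expression "stable", the step bound, the min/max operators, and the filter function are placeholders, not the actual IJ properties:

```json
{
  "properties": [
    {
      "name": "BoundedStable",
      "comment": "Step-bounded reachability: the bound sits in the until operator itself, so no reward structure is needed. 'stable' and the bound are placeholders.",
      "expression": {
        "op": "filter",
        "fun": "max",
        "states": { "op": "initial" },
        "values": {
          "op": "Pmin",
          "exp": {
            "op": "U",
            "left": true,
            "right": "stable",
            "step-bounds": { "upper": 100 }
          }
        }
      }
    },
    {
      "name": "ExpectedStepsToStabilise",
      "comment": "Expected number of steps to the goal: a constant reward of 1 accumulated per step, instead of a dedicated reward structure in the model.",
      "expression": {
        "op": "filter",
        "fun": "max",
        "states": { "op": "initial" },
        "values": {
          "op": "Emax",
          "exp": 1,
          "accumulate": [ "steps" ],
          "reach": "stable"
        }
      }
    }
  ]
}
```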
