Consider modelling failure reasons into Limbo spec #43
@woodruffw This is low priority, but just something I've been thinking about.
Yes, thanks for calling this out! This is something that's also been in the back of my mind. One challenge here is that each implementation we might want to test has its own error codes (and stability rules, or lack thereof, around them). For implementations that return some kind of stringified error message, we could maybe do something like a mapping of failure reasons to per-implementation error strings (maybe with regexps?), and then each implementation's handler could check for its own identifier and attempt to match it.
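Something along these lines, purely as a sketch (the reason identifiers and the patterns below are made up, not real error strings emitted by any library):

```python
import re

# Hypothetical mapping from a Limbo failure-reason identifier to per-implementation
# error patterns. Both the reason names and the regexes are placeholders.
FAILURE_REASON_PATTERNS = {
    "expired-leaf": {
        "cryptography": re.compile(r"not valid at validation time"),
        "openssl": re.compile(r"certificate has expired"),
    },
    "name-constraint-violation": {
        "cryptography": re.compile(r"(excluded|permitted) subtree"),
        "openssl": re.compile(r"name constraints"),
    },
}


def error_matches(reason: str, implementation: str, error_message: str) -> bool:
    """Check whether an implementation's error message matches the expected reason."""
    pattern = FAILURE_REASON_PATTERNS.get(reason, {}).get(implementation)
    return pattern is not None and pattern.search(error_message) is not None
```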
I was thinking that we'd push that onto each individual harness. So in Limbo, we just say that a test ought to fail due to a particular reason. Then each individual harness would be responsible for doing something like the following (excuse the syntax, hope you get the idea):
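A minimal sketch of the idea, assuming a hypothetical `expected_failure_reason` field on the testcase and made-up error strings on the harness side:

```python
# Hypothetical shape of a testcase that declares why it should be rejected; the
# id and the `expected_failure_reason` field are made up for illustration.
testcase = {
    "id": "some-name-constraint-testcase",
    "expected_result": "FAILURE",
    "expected_failure_reason": "name-constraint-violation",
}

# Hypothetical harness-side mapping from this implementation's error text back
# onto Limbo's failure reasons (placeholder substrings, not real error messages).
MY_ERROR_PATTERNS = {
    "name-constraint-violation": ["excluded subtree", "name constraints"],
    "expired-leaf": ["certificate has expired"],
}


def rejected_for_expected_reason(testcase: dict, error_message: str) -> bool:
    """Return True if the chain was rejected for the reason the testcase expects."""
    expected = testcase["expected_failure_reason"]
    return any(p in error_message for p in MY_ERROR_PATTERNS.get(expected, []))
```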
That way, Limbo doesn't need to know which implementations it's being used to test. That would feel more "pure", but we should do whatever is more practical, so if there's an advantage to baking that information into Limbo, let's just do that. The main problem I foresee is that the error messages (whether they're textual or just a return code) might not be at the same level of granularity as what Limbo specifies. Actually, our
(Whoops, accidentally closed.)
Yeah, this is a good point -- limbo probably shouldn't reference any implementation in particular, so baking error strings like this is probably a bad idea.
Yeah, this is going to be a uniform issue with validators, and is arguably something they shouldn't care about: the validator is really only required to return a yes/no answer, and there's no "right" way to flatten a set of failed prospective validation paths into a concise and cogent human-readable error message. At best, I think the most we can hope for is somewhat consistent error messages when the EE/leaf itself is invalid or causes an error; for anything in the rest of the chain, nothing is guaranteed.
One thing I've noticed during development is that it's very easy for chains to be rejected via the wrong check. Unless you debug the test yourself, it's difficult to know whether you're exercising the intended behaviour or whether it's bouncing off some other check. And once we've verified that it's being rejected for the right reason, there's no guarantee that the logic will remain that way over time.
Obviously we don't want to bind Limbo to pyca/cryptography in any meaningful way, but I was wondering if there's value in having information about why a particular test ought to fail. Each harness can then optionally map its own error messages to the Limbo failure reason enumeration to check whether it hit the right check or not.
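For concreteness, a failure-reason enumeration in the testcase model might look roughly like this (the enum and its members are a hypothetical sketch, not the actual Limbo schema):

```python
import enum


class ExpectedFailureReason(str, enum.Enum):
    """Hypothetical reasons a chain is expected to be rejected."""

    EXPIRED_LEAF = "expired-leaf"
    SELF_SIGNED_LEAF = "self-signed-leaf"
    NAME_CONSTRAINT_VIOLATION = "name-constraint-violation"
    UNKNOWN_CRITICAL_EXTENSION = "unknown-critical-extension"
    PATH_LENGTH_EXCEEDED = "path-length-exceeded"
```

A testcase could then carry an optional field of this type, and each harness would only need to translate its own errors into these values.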