Required int/float sizes implementations need to support? #1029
Comments
A related issue is the maximum values; this came up in toml-lang/toml-test#151. Should parsers throw an error on numbers larger than int64, even though the implementation can technically support them? Right now the spec just says: "Arbitrary 64-bit signed integers (from −2^63 to 2^63−1) should be accepted and handled losslessly. If an integer cannot be represented losslessly, an error must be thrown."
But it doesn't say whether e.g. int128 can be supported. Floats have a similar issue, with the added complication that a parser can return Inf for values out of range. In my testing about half of the implementations return an error and about half return Inf (see the linked issue above). I'm not sure what makes the most sense here; I'd say an error is probably the most consistent and easiest option. At the very least the spec should clarify that this may be supported.
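For illustration, a minimal TypeScript sketch of the two behaviours described above for a float literal that overflows binary64 (the function name `decodeFloat` and the `strict` flag are hypothetical, not taken from any particular parser):

```typescript
// A float literal such as 1e400 overflows binary64. Roughly half of the
// implementations tested raise an error, the other half return Infinity.
function decodeFloat(literal: string, strict: boolean): number {
  const value = Number(literal);          // overflow yields Infinity/-Infinity
  if (strict && !Number.isFinite(value)) {
    throw new RangeError(`float out of range: ${literal}`);
  }
  return value;
}

decodeFloat("1e400", false); // => Infinity
decodeFloat("1e400", true);  // throws RangeError
```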
Good question. toml++ is C++17 and embedded folks are rarely lucky enough to use anything post-C++11, so I don't have a clear idea there 😅 (the ones I'm aware of that have used TOML++ haven't had any issues with float64.)
"Should" doesn't mean "must", otherwise "must" would have been used. I would interpret it as per RFC 2119: strongly advised, but not absolutely required. So implementations can ignore this expectation if they really have to, but only then, and they should certainly document it. My understanding in regard to JavaScript is that any TOML implementation should switch to BigInt when encountering large integers outside the safe integer range. That's certainly doable, so I see no good excuse to ignore this expectation here. Also the spec only gives a minimum expectation here – implementations that can accept even bigger integers are certainly fine to do so. Only if they don't and encounter an integer outside their handled range they must throw an error – that's what the spec says. |
I was gonna say pretty much what @ChristianSi said. It's not a hard requirement and, like many other details that are deferred to implementations to avoid stifling things or making it too difficult to implement, the spec affords implementors some flexibility to avoid additional complexity when their intended platform has constraints, while still leaving room to go "above and beyond" when it doesn't, when they're willing to take on more complexity, or when a need is present.
Right, so what "must" it support then? Is an implementation supporting only int8 considered compliant? Right now there is no mention at all of minimum requirements, so in that context it does come off more as a "must" than a "should". I have the impression that's how most people interpret it anyway.
From RFC 2119: "SHOULD: This word, or the adjective 'RECOMMENDED', mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course."
I think this does a decent job of describing the intent here.
So what you're saying is that an implementation supporting only int8 is considered compliant? I don't think RFC language is too helpful here, because the spec never mentions it (and in RFCs the keywords are all-caps to clarify it's SHOULD-according-to-RFC-2119). The text should be clear on its own, using the "common-sense meaning". This could mean adding a "there is no minimum or maximum int size, but implementations should [..]" sentence, or saying "implementations must support such-and-such, and should support this-and-that". Or maybe something else.
I'm not saying we're using SHOULD-according-to-RFC-2119, but I do think that RFC's formal description does a good job of describing the intended meaning here -- you are expected to do X, but you can also not do it if there's a good reason not to.
Yea, sure. If you wanted to do that for some reason, go ahead. I think it'd be OK to do that, at the cost of knowing that a bunch of legitimate usage patterns would no longer be supported with this model.
I'd say to be "fully compliant", an implementation must also support "should" requirements. I would also consider that a suitable standard for test suites – if 64-bit integers aren't fully supported, some of the tests may fail, and that's fine. If we change anything in the spec, I'd be in favor of simply changing "should" to "must", making this a strict requirement. But this hasn't caused any issues so far and I doubt it will, so I think we have way more important things to focus on.
The spec says: "Arbitrary 64-bit signed integers (from −2^63 to 2^63−1) should be accepted and handled losslessly. If an integer cannot be represented losslessly, an error must be thrown."
"Should" can mean any number of things.
Does this mean "must" or "it's also okay to accept less"?
Practically speaking, int64 is annoying in JS as it only has floats, which can only represent integers exactly up to 2^53 (roughly int54). You can use BigInt, and some implementations do this, but it's not very fluid.
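To make that precision point concrete, a small TypeScript/JS snippet showing where plain numbers start to lose integer precision:

```typescript
// 2^53 + 1 is the first integer that float64 cannot represent exactly, so
// Number silently rounds it while BigInt keeps the exact value.
const literal = "9007199254740993";        // 2^53 + 1
console.log(Number(literal));              // 9007199254740992 (rounded)
console.log(BigInt(literal).toString());   // "9007199254740993" (exact)
console.log(Number(literal) === 2 ** 53);  // true: the +1 was lost
```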
This is also an issue for JSON compatibility, which, like JS, only has floats.
I'm not sure if there are other cases where int64 poses issues? But based on the JSON case alone I'd say that it should be changed to "MUST accept int32, RECOMMEND accepting at least int64".
For float64 I'm not sure if there are environments or use-cases where it doesn't work? Maybe embedded use cases @marzer? Otherwise I'd be in favour of just requiring float64.