This repository has been archived by the owner on Aug 13, 2023. It is now read-only.
The current approach of using millisecond timestamps is not correct, because several messages might arrive within the same millisecond.
Let's assign an ID to every message. The IDs need to be increasing to make client implementations easier, so a GUID doesn't work for that.
My proposal is to use a pair (timestamp, counter). Every second (or millisecond) the counter is reset, and for every message it is incremented. That way every message gets its own ID, and a client can request messages starting from that ID without hoping that no other messages arrived in the same millisecond. A sketch of such a generator is below.
The counter may be one for the whole network or one per channel/query; that's an implementation detail.
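A minimal sketch of what such an ID generator could look like (assuming millisecond resolution and a single per-network counter; the class and method names here are made up for illustration and are not part of the module):

```python
import time
import threading

class MessageIdGenerator:
    """Produces strictly increasing (timestamp_ms, counter) message IDs."""

    def __init__(self):
        self._lock = threading.Lock()
        self._last_ts = 0
        self._counter = 0

    def next_id(self):
        with self._lock:
            ts = int(time.time() * 1000)  # current time in milliseconds
            if ts <= self._last_ts:
                # Same millisecond as the previous message (or the clock went
                # backwards): keep the last timestamp and bump the counter so
                # IDs stay strictly increasing.
                ts = self._last_ts
                self._counter += 1
            else:
                # New millisecond: remember it and reset the counter.
                self._last_ts = ts
                self._counter = 0
            return (ts, self._counter)

gen = MessageIdGenerator()
print(gen.next_id())  # e.g. (1692000000123, 0)
print(gen.next_id())  # e.g. (1692000000123, 1) if called within the same millisecond
```

Since tuples compare lexicographically, a client can then ask for all messages with an ID greater than the last one it has seen, with no ambiguity inside a single millisecond.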
There's a chance of an occasional double message, but I'm a bit sceptical about over-engineering this. After all, I've been happily using the module for a year now. :)