Harden scheduler to converge on common SchedulingPlan #2299
Conversation
Force-pushed from 893c027 to 44f9fd4
Thank you @tillrohrmann for creating this PR. It looks good to me but I left one question.
@@ -202,6 +204,21 @@ impl<T: TransportConnect> Scheduler<T> {
         nodes_config: &NodesConfiguration,
         placement_hints: impl PartitionProcessorPlacementHints,
     ) -> Result<(), Error> {
+        // todo temporary band-aid to ensure convergence of multiple schedulers. Remove once we
+        // accept equivalent configurations and remove persisting of the SchedulingPlan
+        if self.last_updated_scheduling_plan.elapsed() > Duration::from_secs(10) {
Won't this mean that a change can go unnoticed if it was updated less than 10 seconds ago? Maybe the fetch/assign could happen automatically on a SchedulingPlan version change instead?
The scheduling plan is not part of the metadata management. That's why we can't watch for version changes and then update (if I understood you correctly); we need to fetch it from the metadata store.
If this scheduler instance tries to update the SchedulingPlan, it will see the updated SchedulingPlan. This was already implemented.
What this PR tries to solve is the case where a scheduler is still operating on an older SchedulingPlan version but sees no need to update it (e.g. because all the nodes it had assigned are alive). If this older SchedulingPlan differs from the newer one, contradicting instructions can be sent to the PPMs. That's what this commit addresses by eventually bringing all nodes up to date w.r.t. the SchedulingPlan version. It's true that there is a window of 10 s in which we don't see updates if we are still fine with our current SchedulingPlan and see no need to update it.
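Roughly, the band-aid boils down to something like the following sketch. Only `last_updated_scheduling_plan`, the 10 s threshold, and the `SchedulingPlan` name come from the diff above; every other type, field, and method (e.g. `MetadataStoreClient::get_scheduling_plan`) is an illustrative stand-in, not the actual Restate API.

```rust
use std::time::{Duration, Instant};

// Simplified stand-ins for the real types; see the hedging note above.
struct SchedulingPlan {
    version: u64,
}

struct MetadataStoreClient;

#[derive(Debug)]
struct Error;

impl MetadataStoreClient {
    // Hypothetical read of the persisted SchedulingPlan from the metadata store.
    fn get_scheduling_plan(&self) -> Result<Option<SchedulingPlan>, Error> {
        Ok(None)
    }
}

struct Scheduler {
    scheduling_plan: SchedulingPlan,
    last_updated_scheduling_plan: Instant,
    metadata_store_client: MetadataStoreClient,
}

impl Scheduler {
    /// Re-fetch the SchedulingPlan at most every 10 s so that a scheduler that is
    /// "happy" with a stale plan still converges on the latest persisted version.
    fn maybe_refresh_scheduling_plan(&mut self) -> Result<(), Error> {
        if self.last_updated_scheduling_plan.elapsed() > Duration::from_secs(10) {
            if let Some(plan) = self.metadata_store_client.get_scheduling_plan()? {
                // Only adopt strictly newer plans; never regress to an older one.
                if plan.version > self.scheduling_plan.version {
                    self.scheduling_plan = plan;
                }
            }
            self.last_updated_scheduling_plan = Instant::now();
        }
        Ok(())
    }
}
```

The version guard makes the refresh idempotent: re-fetching can only move a scheduler forward to a newer plan, never back.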
Ah sorry, I was under the wrong impression that anything stored in the metadata store is managed by default.
Thank you for the clarification.
Force-pushed from 44f9fd4 to d169372
This commit adds a periodic SchedulingPlan check which allows multiple Schedulers to eventually agree on the same SchedulingPlan. Before, it could happen that two Schedulers had different SchedulingPlans (e.g. one still holding an older version). If those plans contradict each other, the Schedulers would instruct nodes differently. The periodic update helps prevent this situation.
Note: This logic can be removed once we allow the scheduler to accept equivalent partition processor placements. See #2242.
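As a toy illustration of the convergence argument (all names below are made up for the example, not Restate APIs): once every scheduler has done one periodic refresh against the same store, they all hold the latest plan version, so contradicting instructions can no longer be issued.

```rust
// Toy model of schedulers converging on a shared plan version.
#[derive(Clone, Copy, PartialEq, Debug)]
struct PlanVersion(u64);

struct Store {
    latest: PlanVersion,
}

struct Scheduler {
    plan: PlanVersion,
}

impl Scheduler {
    fn refresh(&mut self, store: &Store) {
        // Only move forward; never regress to an older plan.
        if store.latest.0 > self.plan.0 {
            self.plan = store.latest;
        }
    }
}

fn main() {
    let store = Store { latest: PlanVersion(3) };
    // One scheduler is stale but "happy" with its plan; the other is current.
    let mut a = Scheduler { plan: PlanVersion(2) };
    let mut b = Scheduler { plan: PlanVersion(3) };

    // After the periodic refresh, both operate on the same version.
    a.refresh(&store);
    b.refresh(&store);
    assert_eq!(a.plan, b.plan);
    println!("converged on {:?}", a.plan);
}
```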