Harden scheduler to converge on common SchedulingPlan #2299

Merged · 1 commit · Nov 22, 2024
42 changes: 31 additions & 11 deletions crates/admin/src/cluster_controller/scheduler.rs
@@ -8,9 +8,9 @@
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0.

use std::collections::{BTreeMap, BTreeSet};

use rand::seq::IteratorRandom;
use std::collections::{BTreeMap, BTreeSet};
use std::time::{Duration, Instant};
use tracing::debug;
use xxhash_rust::xxh3::Xxh3Builder;

@@ -90,6 +90,7 @@ impl<T: PartitionProcessorPlacementHints> PartitionProcessorPlacementHints for &

pub struct Scheduler<T> {
scheduling_plan: SchedulingPlan,
last_updated_scheduling_plan: Instant,

task_center: TaskCenter,
metadata_store_client: MetadataStoreClient,
@@ -118,6 +119,7 @@ impl<T: TransportConnect> Scheduler<T> {

Ok(Self {
scheduling_plan,
last_updated_scheduling_plan: Instant::now(),
task_center,
metadata_store_client,
networking,
@@ -180,12 +182,12 @@ impl<T: TransportConnect> Scheduler<T> {
let scheduling_plan = self.try_update_scheduling_plan(scheduling_plan).await?;
match scheduling_plan {
UpdateOutcome::Written(scheduling_plan) => {
self.scheduling_plan = scheduling_plan;
self.assign_scheduling_plan(scheduling_plan);
break;
}
UpdateOutcome::NewerVersionFound(scheduling_plan) => {
self.scheduling_plan = scheduling_plan.clone();
builder = scheduling_plan.into_builder();
self.assign_scheduling_plan(scheduling_plan);
builder = self.scheduling_plan.clone().into_builder();
}
}
} else {
@@ -202,6 +204,21 @@ impl<T: TransportConnect> Scheduler<T> {
nodes_config: &NodesConfiguration,
placement_hints: impl PartitionProcessorPlacementHints,
) -> Result<(), Error> {
// todo temporary band-aid to ensure convergence of multiple schedulers. Remove once we
// accept equivalent configurations and remove persisting of the SchedulingPlan
if self.last_updated_scheduling_plan.elapsed() > Duration::from_secs(10) {
Contributor:
Won't this mean that a change can go unnoticed if it was updated less than 10 seconds ago? Maybe the fetch/assign could happen automatically on scheduling plan version change instead?

Contributor (author):
The scheduling plan is not part of the metadata management. That's why we can't watch for version changes and then update (if I understood you correctly); we need to fetch it from the metadata store.

In case this scheduler instance tries to update the SchedulingPlan, it will see the updated SchedulingPlan. This was already implemented.

What this PR tries to solve is the case where a scheduler is still operating on an older SchedulingPlan version but sees no need to update it (e.g. because all the nodes it had assigned are alive). If this older SchedulingPlan differs from the newer one, contradicting instructions can be sent to the PPMs. That's what this commit tries to solve by eventually bringing all nodes up to date with respect to the SchedulingPlan version. It's true that there is a window of 10s in which we don't see updates if we are still fine with our current SchedulingPlan and see no need to update it.

Contributor:

Ah sorry, I was under the wrong impression that anything stored in the metadata store is managed by default.

Thank you for the clarification.
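The periodic-refresh band-aid under discussion can be sketched as follows. This is a minimal, self-contained illustration, not the real implementation: `Plan`, `Scheduler`, and the `fetch` closure are hypothetical stand-ins for `SchedulingPlan` and the metadata store client, and the log is a plain `eprintln!` instead of `tracing::debug!`.

```rust
use std::time::{Duration, Instant};

// Illustrative stand-in for SchedulingPlan: only the version matters here.
#[derive(Debug)]
struct Plan {
    version: u64,
}

struct Scheduler {
    plan: Plan,
    last_refreshed: Instant,
    refresh_interval: Duration,
}

impl Scheduler {
    fn new(plan: Plan, refresh_interval: Duration) -> Self {
        Self {
            plan,
            last_refreshed: Instant::now(),
            refresh_interval,
        }
    }

    /// Periodically re-read the shared plan so that schedulers which see no
    /// local reason to write still converge on the latest stored version.
    fn maybe_refresh(&mut self, fetch: impl Fn() -> Plan) {
        if self.last_refreshed.elapsed() >= self.refresh_interval {
            let fetched = fetch();
            if fetched.version > self.plan.version {
                eprintln!(
                    "Found a newer plan in the store. Updating to version {}.",
                    fetched.version
                );
            }
            self.assign(fetched);
        }
    }

    /// Adopt a plan and reset the refresh timer, mirroring the PR's
    /// assign_scheduling_plan helper.
    fn assign(&mut self, plan: Plan) {
        self.plan = plan;
        self.last_refreshed = Instant::now();
    }
}
```

With a zero interval every call refreshes, which makes the convergence behaviour easy to exercise in a test; the PR uses a 10-second interval, accepting the staleness window discussed above.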

let new_scheduling_plan = self.fetch_scheduling_plan().await?;

if new_scheduling_plan.version() > self.scheduling_plan.version() {
debug!(
"Found a newer scheduling plan in the metadata store. Updating to version {}.",
new_scheduling_plan.version()
);
}

self.assign_scheduling_plan(new_scheduling_plan);
}

let mut builder = self.scheduling_plan.clone().into_builder();

self.ensure_replication(&mut builder, alive_workers, nodes_config, &placement_hints);
@@ -214,12 +231,17 @@ impl<T: TransportConnect> Scheduler<T> {
.into_inner();

debug!("Updated scheduling plan: {scheduling_plan:?}");
self.scheduling_plan = scheduling_plan;
self.assign_scheduling_plan(scheduling_plan);
}

Ok(())
}

fn assign_scheduling_plan(&mut self, scheduling_plan: SchedulingPlan) {
self.scheduling_plan = scheduling_plan;
self.last_updated_scheduling_plan = Instant::now();
}

async fn try_update_scheduling_plan(
&self,
scheduling_plan: SchedulingPlan,
@@ -237,21 +259,19 @@ impl<T: TransportConnect> Scheduler<T> {
Err(err) => match err {
WriteError::FailedPrecondition(_) => {
// There was a concurrent modification of the scheduling plan. Fetch the latest version.
let scheduling_plan = self
.fetch_scheduling_plan()
.await?
.expect("must be present");
let scheduling_plan = self.fetch_scheduling_plan().await?;
Ok(UpdateOutcome::NewerVersionFound(scheduling_plan))
}
err => Err(err.into()),
},
}
}

async fn fetch_scheduling_plan(&self) -> Result<Option<SchedulingPlan>, ReadError> {
async fn fetch_scheduling_plan(&self) -> Result<SchedulingPlan, ReadError> {
self.metadata_store_client
.get(SCHEDULING_PLAN_KEY.clone())
.await
.map(|scheduling_plan| scheduling_plan.expect("must be present"))
}

fn ensure_replication(
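The `WriteError::FailedPrecondition` handling in `try_update_scheduling_plan` is an instance of the standard optimistic-concurrency (compare-and-swap) pattern: write only if the stored version is the one you last read, and on conflict re-fetch and rebuild. A minimal in-memory sketch, where `Store` and `put_if_version` are hypothetical and only illustrate the pattern, not the real `MetadataStoreClient` API:

```rust
#[derive(Debug, PartialEq)]
enum WriteError {
    FailedPrecondition,
}

#[derive(Default)]
struct Store {
    // (version, payload); None means the key is absent.
    value: Option<(u64, String)>,
}

impl Store {
    /// Write succeeds only if the currently stored version matches
    /// `expected`; otherwise a concurrent writer got there first and the
    /// caller must re-fetch before retrying.
    fn put_if_version(
        &mut self,
        expected: Option<u64>,
        new_version: u64,
        payload: String,
    ) -> Result<(), WriteError> {
        if self.value.as_ref().map(|(v, _)| *v) != expected {
            return Err(WriteError::FailedPrecondition);
        }
        self.value = Some((new_version, payload));
        Ok(())
    }

    fn get(&self) -> Option<(u64, String)> {
        self.value.clone()
    }
}
```

On `FailedPrecondition` the scheduler fetches the latest plan and starts over from it, which is why the PR could tighten `fetch_scheduling_plan` to return `SchedulingPlan` directly: after a conflicting write the key is guaranteed to be present.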