🌱 cluster: split MachinesReady and MachinesUpToDate into ControlPlane and Worker specific conditions #11461
base: main
Conversation
… WorkerMachinesUpToDate
@@ -602,66 +632,84 @@ func setWorkersAvailableCondition(ctx context.Context, cluster *clusterv1.Cluster
 	v1beta2conditions.Set(cluster, *workersAvailableCondition)
 }

-func setMachinesReadyCondition(ctx context.Context, cluster *clusterv1.Cluster, machines collections.Machines, getDescendantsSucceeded bool) {
+type machinesReadySetter struct {
Am I wrong, or are we one step away from having a single machinesConditionSetter that works for both Ready and UpToDate?
What we need is to additionally pass the condition to aggregate, use it in the NewAggregateCondition call, and use it when building the "Failed to aggregate Machine's %s conditions" message.
The filter for upToDate machines can be moved out of this func and applied before calling the machinesConditionSetter.
WDYT?
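A minimal sketch of the shared setter the reviewer describes, in plain Go with stand-in types. The names here (machine, aggregate, the struct fields) are illustrative, not Cluster API's actual API; the real code would operate on collections.Machines and go through v1beta2conditions.NewAggregateCondition:

```go
package main

import "fmt"

// machine is a stand-in for clusterv1.Machine.
type machine struct {
	name         string
	controlPlane bool
}

// machinesConditionSetter aggregates one machine condition type (Ready or
// UpToDate) into one cluster-level condition. Both are configuration, so a
// single setter can serve all four new conditions.
type machinesConditionSetter struct {
	condition       string    // machine condition to aggregate, e.g. "Ready"
	targetCondition string    // cluster condition to set, e.g. "ControlPlaneMachinesReady"
	machines        []machine // already filtered by the caller (control plane vs. workers)
}

func (s machinesConditionSetter) set() {
	// In the real code this would be a NewAggregateCondition call
	// parameterized by s.condition; the error message is parameterized the
	// same way, as suggested in the review.
	if err := aggregate(s.machines, s.condition); err != nil {
		fmt.Printf("Failed to aggregate Machine's %s conditions: %v\n", s.condition, err)
		return
	}
	fmt.Printf("set %s on the Cluster\n", s.targetCondition)
}

// aggregate is a placeholder for the real aggregation logic.
func aggregate(machines []machine, condition string) error {
	if len(machines) == 0 {
		return fmt.Errorf("no machines to aggregate %q from", condition)
	}
	return nil
}

func main() {
	all := []machine{{name: "cp-1", controlPlane: true}, {name: "w-1"}}

	// Filters (control plane vs. workers, and the upToDate filter) are
	// applied before the setter is invoked, keeping the setter generic.
	var cp []machine
	for _, m := range all {
		if m.controlPlane {
			cp = append(cp, m)
		}
	}
	machinesConditionSetter{
		condition:       "Ready",
		targetCondition: "ControlPlaneMachinesReady",
		machines:        cp,
	}.set()
}
```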
@@ -65,8 +65,38 @@ func (r *Reconciler) updateStatus(ctx context.Context, s *scope) {
 	setControlPlaneAvailableCondition(ctx, s.cluster, s.controlPlane, s.controlPlaneIsNotFound)
 	setControlPlaneInitializedCondition(ctx, s.cluster, s.controlPlane, s.descendants.controlPlaneMachines, s.infraClusterIsNotFound, s.getDescendantsSucceeded)
 	setWorkersAvailableCondition(ctx, s.cluster, expv1.MachinePoolList{}, s.descendants.machineDeployments, s.getDescendantsSucceeded)
-	setMachinesReadyCondition(ctx, s.cluster, allMachines, s.getDescendantsSucceeded)
-	setMachinesUpToDateCondition(ctx, s.cluster, allMachines, s.getDescendantsSucceeded)
+	machinesReadySetter{
What do you think about having setControlPlane/WorkerMachinesReadyCondition and setControlPlane/WorkerMachinesUpToDateCondition functions with just the call to the setter? This would help keep updateStatus readable.
Also, it would make it easier to test those setters (currently we are duplicating the setter configuration in tests, which is somewhat error-prone).
Oh, I forgot: we should also add the new conditions to https://github.com/kubernetes-sigs/cluster-api/blob/main/util/conditions/v1beta2/sort.go
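A sketch of those wrappers, reusing the stand-in machinesConditionSetter from the sketch above (so this fragment leans on those types). The function names follow the reviewer's proposal; the UpToDate wrappers would be analogous. Pinning each configuration down in exactly one named function keeps updateStatus a flat list of calls and lets tests exercise the wrapper instead of rebuilding the configuration by hand:

```go
// setControlPlaneMachinesReadyCondition and its sibling own one setter
// configuration each; updateStatus just calls them in sequence.
func setControlPlaneMachinesReadyCondition(machines []machine) {
	machinesConditionSetter{
		condition:       "Ready",
		targetCondition: "ControlPlaneMachinesReady",
		machines:        filterMachines(machines, func(m machine) bool { return m.controlPlane }),
	}.set()
}

func setWorkerMachinesReadyCondition(machines []machine) {
	machinesConditionSetter{
		condition:       "Ready",
		targetCondition: "WorkerMachinesReady",
		machines:        filterMachines(machines, func(m machine) bool { return !m.controlPlane }),
	}.set()
}

// filterMachines is a stand-in for collections.Machines filtering.
func filterMachines(ms []machine, keep func(machine) bool) []machine {
	out := make([]machine, 0, len(ms))
	for _, m := range ms {
		if keep(m) {
			out = append(out, m)
		}
	}
	return out
}
```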
@chrischdi: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
 			)),
 		),
 	},
 )
 if err != nil {
-	log.Error(err, "Failed to aggregate Machine's UpToDate conditions")
+	log.Error(err, "Failed to aggregate Machine's Ready conditions")
should this be:
-log.Error(err, "Failed to aggregate Machine's Ready conditions")
+log.Error(err, fmt.Sprintf("Failed to aggregate Machine's %s conditions", s.condition))
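Parameterizing the log message on the condition being aggregated (the setter's s.condition field, per the suggestion) would keep a single log line correct for both the Ready and UpToDate variants, which fits the shared-setter refactor discussed above.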
 name:    "One machine up-to-date",
 cluster: fakeCluster("c"),
 machines: []*clusterv1.Machine{
 	fakeMachine("up-to-date-1", v1beta2Condition(metav1.Condition{
I know it doesn't change the test result, but what about adding controlPlane(true) for the machines in this test, like you did for TestSetControlPlaneMachinesReadyCondition?
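A suggestion-style fragment of what that could look like, mirroring the diff above. fakeMachine, controlPlane, and v1beta2Condition are the PR's existing test helpers; the condition literal below is illustrative rather than the real constant:

```go
machines: []*clusterv1.Machine{
	fakeMachine("up-to-date-1",
		controlPlane(true), // explicit, matching TestSetControlPlaneMachinesReadyCondition
		v1beta2Condition(metav1.Condition{
			Type:   "UpToDate", // illustrative; the test would use the real condition type constant
			Status: metav1.ConditionTrue,
		}),
	),
},
```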
What this PR does / why we need it:
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
Part of #11105
/area conditions