Tracking issue for changing the cluster (reconf) #970
In order to address the issue, IMO kubeadm should clearly split updates (changes to the cluster configuration) from upgrades (change of release), by removing any option to change the cluster config during upgrades and creating a separate, new workflow for reconfiguration. Main rationale behind this opinion: configuration changes and release upgrades should stay two distinct, predictable operations.
/kind feature
@neolit123 I have a KEP in flight for this.
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
Do we still need this?
i've removed the rotten lifecycle, given the way this overlaps with the Kustomize ideas: /lifecycle frozen
@fabriziopandini can this be renamed as the ticket for "change the cluster"?
What's the current status of this? I think it's common to need to update some configs after the cluster is already up. I personally expect a single, dedicated command (or something along those lines), so that we can update any component's config, especially ApiServer and ControllerManager, in a streamlined way. Thank you!
@brightzheng100
That solution looks a bit complicated, but please proceed to make it a complete one! Anyway, I tried it out by simply reusing the upgrade command with an updated config, despite some failures along the way. A detailed case can be referred to from here -- hope it helps.
it's really not recommended and i'm trying to deprecate it because of the "failures" part that you mention. also it's really not suited to reconfigure multi-control-plane setups. the existing workaround for modifying the cluster is: edit the kubeadm-config ConfigMap, then SSH to every control plane node and regenerate (or hand-patch) the static pod manifests from the updated configuration; see the sketch below.
with a proper SSH setup this is not that complicated of a bash script, but it's still not the best UX for new users.
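A minimal sketch of that workaround, assuming an updated ClusterConfiguration has already been saved as kubeadm.yaml and copied to each control plane node; the node names and the SSH loop are placeholders:

```bash
# 1) Update the cluster-wide copy of the configuration that kubeadm keeps;
#    this is what "kubeadm join --control-plane" and "kubeadm upgrade" read.
kubectl -n kube-system edit configmap kubeadm-config

# 2) On every control plane node, regenerate the static pod manifests from
#    the updated ClusterConfiguration (or hand-patch /etc/kubernetes/manifests).
for node in cp-1 cp-2 cp-3; do            # placeholder node names
  ssh "$node" "sudo kubeadm init phase control-plane all --config /root/kubeadm.yaml"
done
```

etcd and kubelet settings have their own `kubeadm init phase` subcommands and ConfigMaps, which is part of why a dedicated command or operator keeps coming up.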
@brightzheng100 thanks for your feedback. UX is a major concern and this is why we are prototyping around this proposal.
This is not required based on my experiments. I haven't found any reason yet to update these ConfigMaps manually if we just want to enable/disable some features in the control plane components.
Currently I have a single-master env so I haven't tried it out yet, but yup, I think we have to sync up these static pods' manifests. Frankly, building a whole operator for that feels like a lot for a single-control-plane setup.
joining new control-planes to the cluster would still need an updated version of ClusterConfiguration.
sadly, there are many reasons for the coredns pods to enter a crashloop; the best way is to look at the logs. if nothing works, removing the deployment and re-applying the CNI plugin should also fix it.
that is why keeping the kubeadm-config ConfigMap up to date still matters.
i agree, for patching CP manifests on a single-CP setup you are better off just applying manual steps instead of the operator.
a similar approach was discussed, where we execute "a command" on all nodes to apply upgrade / re-config, but we went for the operator instead because that's a common pattern in k8s.
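As an illustration of those manual steps on a single control plane node (the flag shown is only a placeholder): editing a static pod manifest in place is enough, because the kubelet watches the manifests directory and recreates the pod when the file changes.

```bash
# Edit the kube-apiserver static pod manifest directly on the node.
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
# For example, add or change a flag in the container command list, such as:
#   - --feature-gates=SomeFeature=true      # placeholder feature gate

# The kubelet recreates the pod automatically; watch it come back.
kubectl -n kube-system get pods -w
```

Note that this drifts from what is stored in the kubeadm-config ConfigMap unless that is updated too, which is the point above about joining new control planes.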
For me (on Kubernetes 1.17.11) the following general approach worked: export the stored ClusterConfiguration, edit it, and regenerate the affected control plane manifest. A sketch of the shape of it is below.
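A rough sketch of that kind of procedure on a 1.17-era cluster, not necessarily the exact commands from the comment above; `kubeadm config view` was still available in that release, and kube-apiserver is used only as an example component:

```bash
# Dump the ClusterConfiguration currently stored in the cluster.
kubeadm config view > kubeadm.yaml
# (equivalent: kubectl -n kube-system get cm kubeadm-config \
#    -o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yaml)

# Edit kubeadm.yaml, e.g. apiServer.extraArgs or controllerManager.extraArgs.

# Regenerate the manifest for the component that changed.
sudo kubeadm init phase control-plane apiserver --config kubeadm.yaml

# Upload the edited configuration back so the in-cluster copy stays in sync.
sudo kubeadm init phase upload-config kubeadm --config kubeadm.yaml
```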
archiving this ticket. there are no new feature requests to modify a cluster with or without an operator. |
This is the tracking for "change the cluster":
The feature request is to provide an easy-to-use UX for users who want to change properties of a running cluster.
existing proposal docs: TODO
kubeadm operator: #1698
User story: #1581