kubeadm re-reads kubeconfig in init phase bootstrap-token #3129
Comments
Not sure if this is SIG CLI or SIG Cluster Lifecycle; both seem suitable. I'd probably go with the former, since this is mostly about passing data between subcommands, not about the lifecycle itself.
/transfer kubeadm
Different components are owned by different SIGs; in this case it's SIG Cluster Lifecycle for kubeadm.
That is a peculiar use case, and I don't think I have seen it before.
One problem is that data.Client() might not be the same as a client constructed from data.KubeConfigPath(): in dry-run mode a fake client is used at master. I agree that it seems awkward to construct a new client later from KubeConfigPath(). It seems to me that this should be done only once, during the init "data" initialization, and a clientcmdapi.Config pointer should be passed around. Potential fix:
Would you like to try sending a PR?
Given that the user can just write the file to disk in a safe location and invoke kubeadm as a super user, I don't think we should backport a fix to older releases.
One surprise here is that it reaches line 56 and panics while
doesn't throw any errors.
We retrieve these kubeconfigs from a remote store and use them within kubeadm; this was an effort to minimise saving sensitive data on disk (in case we failed to clean it up, and to avoid leaking it to other processes).
Thanks for the pointers. I was thinking along similar lines, but I thought that this interface assertion - https://github.com/kubernetes/kubernetes/blob/9d62330bfa31a4fce28093d052f65ff0e88ac3a0/cmd/kubeadm/app/cmd/phases/upgrade/apply/bootstraptoken.go#L48 - would require all implementations to implement this new kubeconfig method... but now I see that there's only one implementation of this interface, so we should be good. I'll also try to look up the first loading of said kubeconfig; I'm struggling a bit to navigate the Cobra abstractions. I also don't know if there are other commands affected, but we can try this one first and then we'll see.
OK, got it to work. I'll see if I can polish the outstanding issues there (the dry run, reuse of a method), add some docs, and I'll submit a PR at some point today or tomorrow. The diff doesn't look super complex: https://gist.github.com/kokes/9f1601f04cfc8e4a2ee832513b4a3613
I can comment more on the PR. Thanks!
What happened?
I ran kubeadm with an in-memory kubeconfig literal (here reproduced with a file on disk and piped in; the principle is the same), and while many commands worked just fine, this init phase ended in a panic.
What did you expect to happen?
I expected the command to run just fine.
How can we reproduce it (as minimally and precisely as possible)?
With a kubeconfig in admin.conf, run this at 9d62330bfa31a4fce28093d052f65ff0e88ac3a0:
Anything else we need to know?
I added a debug line to LoadFromFile and indeed it gets called twice, so if the source cannot be read repeatedly (e.g. a pipe), it will fail the second time around, because the resulting kubeconfig will be empty.
In the case of this kubeadm phase, the kubeconfig gets read first in client, err := data.Client(), but then the absolute path is retrieved and passed to clusterinfophase.CreateBootstrapConfigMapIfNotExists(client, data.KubeConfigPath()), where it gets read again.
9d62330bfa31a4fce28093d052f65ff0e88ac3a0
Cloud provider
OS version
Darwin foo-bar 23.6.0 Darwin Kernel Version 23.6.0: Wed Jul 31 20:53:05 PDT 2024; root:xnu-10063.141.1.700.5~1/RELEASE_ARM64_T8112 arm64
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)