kubelet stopped posting node status #1682
imdmahajankanika started this conversation in General
Replies: 2 comments · 1 reply
-
Please attach (or upload to a public file-sharing service) a must-gather archive.
-
@vrutkovs I found the root cause: it happened only for the workers because the kubelet config (the generated machine config) of the worker nodes was different from that of the masters, and there was a feature gate.
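If anyone else needs to chase down a similar worker/master drift, one way is to diff the rendered kubelet configs of the two pools. The `oc` commands in the comments and the file contents below are illustrative stand-ins (the real rendered config names and the offending feature gate were not posted), so only the diff step itself is runnable as-is:

```shell
# On a live cluster you would first pull the rendered kubelet machine
# configs for each pool, e.g. (names are hypothetical examples):
#   oc get machineconfig 99-master-generated-kubelet -o yaml > master-kubelet.conf
#   oc get machineconfig 99-worker-generated-kubelet -o yaml > worker-kubelet.conf
# Two tiny stand-in files so the diff step is reproducible anywhere;
# "SomeStaleGate" is a made-up placeholder, not a real feature gate.
cat > master-kubelet.conf <<'EOF'
featureGates:
  RotateKubeletServerCertificate: true
EOF
cat > worker-kubelet.conf <<'EOF'
featureGates:
  RotateKubeletServerCertificate: true
  SomeStaleGate: true
EOF
# Lines unique to the worker config point at pool-specific drift;
# "|| true" keeps the script going since diff exits non-zero on changes.
diff master-kubelet.conf worker-kubelet.conf || true
```

On a live cluster, any line that appears only on the worker side is the first thing to check against the target release's deprecated feature gates.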
-
Describe the bug
Hello! We tried to upgrade our OKD cluster from 4.10 to 4.11 (4.11.0-0.okd-2022-07-29-154152, UPI vSphere). The upgrade went well on the master nodes: all of them were successfully upgraded to FCOS 36, v1.24.0. But as soon as the upgrade started on the worker nodes, the first worker node was updated to FCOS 36 but stayed in the state "NotReady,SchedulingDisabled". We logged in to that worker node and checked the kubelet status, which showed the output below. With the command `journalctl -b -f -u kubelet.service`, it showed the error below. Apart from that, no errors were observed. Even the MachineConfigPool was stuck in the status "Updating", not "Degraded". Also, I was able to `oc login` from that node.
The output of `rpm-ostree status` was as follows:
Version
4.11.0-0.okd-2022-07-29-154152, UPI Vsphere
How reproducible
During the OKD upgrade from 4.10.0-0.okd-2022-06-10-131327 to 4.11.0-0.okd-2022-07-29-154152
Log bundle