Worker Kernel Panic: VFS: Busy inodes after unmount of ceph (ceph) #1753
alexiusflavius
started this conversation in General
-
This needs to be reported to Fedora first - https://bugzilla.redhat.com/enter_bug.cgi
-
Under moderate to high load, worker nodes were rebooting.
While testing OKD on OpenStack I came across an issue where workers would simply reboot for no apparent reason.
Checking the worker logs that remained on the OpenStack side, I found that the panic is caused by busy inodes ("VFS: Busy inodes after unmount of ceph").
Everything is working fine on the production cluster running 4.12.0-0.okd-2023-02-18-033438.
Pods running on Rocky 8.8 and 9.2 show the same issue.
Additional pod and CephFS info
CephFS is running 17.2.6, and I didn't see the above error anywhere else.
Pods are using plain Kubernetes (k8s) functionality:
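The original post doesn't include the manifests, but "plain k8s functionality" with CephFS typically means a pod mounting a CephFS-backed PersistentVolumeClaim. Below is a minimal sketch of that pattern; the names (`cephfs-pvc`, `app-worker`) and the `cephfs` storage class are assumptions for illustration only, not details taken from this cluster.

```yaml
# Hypothetical example - object names, image tag and storage class are
# illustrative; they are not from the cluster described in this report.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany                 # CephFS supports shared RWX access
  storageClassName: cephfs          # assumes a CephFS CSI storage class exists
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app-worker
spec:
  containers:
    - name: app
      image: rockylinux:9.2         # report mentions Rocky 8.8 / 9.2 based pods
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: shared-data
          mountPath: /data          # CephFS volume mounted into the container
  volumes:
    - name: shared-data
      persistentVolumeClaim:
        claimName: cephfs-pvc
```

Pod churn (creating and deleting pods like the one above) is what drives the CephFS mount/unmount cycle that the "Busy inodes after unmount" message refers to.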
Version
OKD version release: 4.13.0-0.okd-2023-09-03-082426, running on OpenStack Wallaby (CentOS 9)
How reproducible
Worker reboots happen all the time, but they can't be predicted.