Events in watch mode stop being detected #53
Comments
@gmichels Do you have an active support entitlement with Splunk? If so, it would be good to have a diag of the indexer to see if there are any errors, in addition to looking at the kubeclient link you provided. If you can open a ticket, send back the case number.
@dbaldwin-splunk sure thing, case number is 1213839. Shall I go ahead and produce the diag and attach it to the ticket? Do you want the trace log as well?
Forgot to mention that pull mode works fine. Even when watch events are no longer sent, the pull events (e.g. pods, namespaces, and nodes in the config below) continue to be sent and are properly received by Splunk:
Events also work just fine when not using …
Please attach all logs to the ticket. Sooner is better, so they don't roll off the indexer in case of a large volume of ingestion.
Hey @gmichels! Apologies for the delay on this one. I have confirmed I can re-create this behavior. Working internally to see what we can do to finally resolve this. Thanks for your patience.
We are having the same issue. We moved to …
This is fixed in splunk/kube-objects:1.1.2. The current objects chart defaults to 1.1.0, so until there is a new objects chart release, you need to override the image version in your values file. |
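For reference, the override described in the comment above would look roughly like the sketch below in a Helm values file. The key path (`image.tag`) is an assumption about the objects chart's values layout, not taken from the chart itself; check the chart's `values.yaml` for the actual key.

```yaml
# Hypothetical values override: pin the kube-objects image to the fixed
# release until a new objects chart ships with 1.1.2 as the default.
image:
  tag: 1.1.2
```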
I am deploying the objects chart on a Kubernetes 1.11.2 cluster with a watch on events only. After roughly 45-60 minutes, new events are no longer detected and therefore not sent to Splunk. Bouncing the pod makes events flow again for the same 45-60 minutes, and the cycle repeats.
I had trace logging enabled, and there is no notification or change in pattern to hint that something is off. Searching around, this seems related to ManageIQ/kubeclient#273.
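The linked kubeclient issue boils down to a watch connection dying silently, with no error raised to the caller. A common mitigation (not necessarily what the eventual fix does) is to bound each watch and periodically re-establish it, so a dead connection can never stall event delivery for long. A minimal, self-contained sketch of that pattern, with a hypothetical `open_watch` callable standing in for the real API client's watch call:

```python
import time

def watch_with_restart(open_watch, timeout_seconds, cycles):
    """Consume a watch stream, but close and re-open it after a bounded
    interval so a silently dead connection cannot stall events forever."""
    events = []
    for _ in range(cycles):
        deadline = time.monotonic() + timeout_seconds
        for event in open_watch():      # (re)establish the watch
            events.append(event)
            if time.monotonic() >= deadline:
                break                   # give up on this connection
    return events

# Stand-in for a real watch call; a real client would stream events
# from the Kubernetes API server instead of yielding fixed strings.
def fake_watch():
    yield "ADDED"
    yield "MODIFIED"

# Two cycles, each consuming the (short) stream to completion.
print(watch_with_restart(fake_watch, timeout_seconds=5.0, cycles=2))
```

In a real deployment the outer loop would run indefinitely (`while True`) rather than a fixed number of cycles, and the timeout would typically be set below the 45-60 minute window after which the stall appears.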
Can you confirm if you see this issue?
I can provide the full trace log if needed, but I'd rather do so via a new ticket in the Splunk Support Portal. Let me know if you'd like the log.
Thank you