Events in watch mode stop being detected #53

Closed
gmichels opened this issue Dec 5, 2018 · 8 comments
Labels
bug (Something isn't working) · support (Support Case Open)

Comments

gmichels (Contributor) commented Dec 5, 2018

I am deploying the objects chart on a Kubernetes 1.11.2 cluster with a watch on events only, and after roughly 45-60 minutes new events are no longer detected and therefore not sent to Splunk. Bouncing the pod makes events flow again for another 45-60 minutes, and then the cycle repeats.

I had trace logging enabled and there was no notification or change in pattern to hint that something was off. Searching around, this seems related to ManageIQ/kubeclient#273.

Can you confirm if you see this issue?

I can provide the full trace log if needed, but I'd rather do it via a new ticket in the Splunk Support Portal. Let me know if you'd like to have the log.

Thank you

dbaldwin-splunk (Contributor) commented:

@gmichels Do you have an active support entitlement with Splunk? If that is the case, it would be good to have a diag of the indexer so we can check for errors there, in addition to looking at the kubeclient link you provided. If you can open a ticket, send back the case number.

gmichels (Contributor, Author) commented Dec 5, 2018

@dbaldwin-splunk sure thing, case number is 1213839.

Shall I go ahead and produce the diag and attach it to the ticket? Do you want the trace log as well?

gmichels (Contributor, Author) commented Dec 5, 2018

Forgot to mention that pull mode works fine. Even when watch events are no longer sent, the pull objects (e.g. pods, namespaces, nodes in the config below) continue to be sent and are properly received by Splunk:

objects:
  core:
    v1:
      - name: pods
      - name: namespaces
      - name: nodes
      - name: events
        mode: watch

Events also work just fine when not using mode: watch.

dbaldwin-splunk (Contributor) commented:

Please attach all logs to the ticket. The sooner the better, so they don't roll off the indexer in case of a large ingestion volume.

gmichels (Contributor, Author) commented Dec 5, 2018

I generated the diag yesterday right after your comment and attached it to the ticket this morning, along with the pod trace log; in that run, watching the events API worked for 46 minutes (timestamps are in EST):

[screenshot attached]

matthewmodestino added the support (Support Case Open) label Dec 6, 2018
matthewmodestino (Collaborator) commented:

Hey @gmichels!

Apologies for the delay on this one.

I have confirmed I can re-create this behavior.

Working internally to see what we can do to finally resolve this.

Thanks for your patience.

matthewmodestino added the bug (Something isn't working) label Feb 6, 2019
bilby91 commented Apr 9, 2019

We are having the same issue. For the moment we have moved events to mode: pull.
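
For reference, a minimal sketch of that pull-mode workaround, based on the objects configuration gmichels posted above. Per the thread, events are collected correctly when mode: watch is not set, so switching the events entry to mode: pull (or dropping the mode line entirely) avoids the stalled watch:

objects:
  core:
    v1:
      - name: pods
      - name: namespaces
      - name: nodes
      - name: events
        mode: pull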

gmichels (Contributor, Author) commented:

This is fixed in splunk/kube-objects:1.1.2.

The current objects chart defaults to 1.1.0, so until there is a new objects chart release, you need to override the image version in your values file.
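
A minimal sketch of such an override, assuming the objects chart exposes the image tag under an image.tag value (the exact key name may differ between chart versions; check the chart's values.yaml):

image:
  name: splunk/kube-objects
  tag: 1.1.2

The same override can also be passed on the command line with Helm, e.g. helm upgrade <release> <chart> --set image.tag=1.1.2.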
