90% of stats are memcached_slab_* #178
Comments
In fact, for us it's worse than this: we run a collector on a common host image, so it affects every Docker container we deploy, not just the related ones.
I believe the issue you are referring to is #118?
I think the slab metrics are a really good candidate given the high cardinality. I would like the flag naming to be in line with the node and mysql exporters (which have extensive on/off toggles for classes of data), so something like …
@SuperQ shout if you disagree; otherwise I would encourage @jdmarshall to send a PR 😄
I looked at the code for the slab metadata before filing the issue. It is... not well-contained. It's going to take someone with experience in this codebase to do it without breaking other things.
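For reference, here is a minimal sketch of the kind of per-collector toggle the node and mysql exporters expose, assuming the kingpin flag style they use. The flag name `collect.slabs`, the `slabCollector` type, and the placeholder metric are illustrative assumptions, not the exporter's actual code, and the sketch only shows the flag/registration pattern, not the untangling of the existing slab code that the previous comment refers to.

```go
// A minimal sketch of a per-collector toggle, assuming the kingpin flag
// style used by node_exporter and mysqld_exporter. The flag name,
// slabCollector type, and placeholder metric are hypothetical.
package main

import (
	"log"
	"net/http"

	"github.com/alecthomas/kingpin/v2"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// --collect.slabs defaults to true so existing dashboards keep working;
// kingpin boolean flags also accept the --no-collect.slabs negation.
var collectSlabs = kingpin.Flag(
	"collect.slabs",
	"Export per-slab metrics (memcached_slab_*).",
).Default("true").Bool()

// slabCollector stands in for the code that parses `stats slabs` and
// `stats items`; here it emits a single placeholder series.
type slabCollector struct {
	chunkSize *prometheus.Desc
}

func newSlabCollector() *slabCollector {
	return &slabCollector{
		chunkSize: prometheus.NewDesc(
			"memcached_slab_chunk_size_bytes",
			"Chunk size of the slab class.",
			[]string{"slab"}, nil,
		),
	}
}

func (c *slabCollector) Describe(ch chan<- *prometheus.Desc) { ch <- c.chunkSize }

func (c *slabCollector) Collect(ch chan<- prometheus.Metric) {
	// The real exporter would emit one series per slab class and stat.
	ch <- prometheus.MustNewConstMetric(c.chunkSize, prometheus.GaugeValue, 96, "1")
}

func main() {
	kingpin.Parse()
	if *collectSlabs {
		// Only register the high-cardinality slab collector when enabled.
		prometheus.MustRegister(newSlabCollector())
	}
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9150", nil))
}
```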
Looking for ways to economize on total stats production, I found that on our project we are seeing around:
- 11,000 stats/interval in total for 2 memcached clusters
- 9,600 stats/interval with the prefix memcached_slab
I don't even know how someone is supposed to interpret that telemetry. I certainly don't think users want to pay 90% of their stats budget for slab allocator data. I'd like a way to turn that off.
In a related ticket that was closed by its author, it was suggested to drop the stats at the collector. But collector configs can be shared by any number of services running on the same box, and getting communal access to that configuration is a whole other layer of logistics to manage, possibly two. With Swarm or Kubernetes managing them, you are asking one person to drop stats collection on behalf of several teams.
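For context, "dropping stats at the collector" in a Prometheus setup usually means a `metric_relabel_configs` rule in the shared scrape config, roughly like the sketch below (the job name and target are placeholders). As the comment above points out, that one file is shared, so a rule added for one service silences the slab series for every team scraping through it.

```yaml
scrape_configs:
  - job_name: memcached                      # placeholder job name
    static_configs:
      - targets: ['memcached-exporter:9150'] # placeholder target
    metric_relabel_configs:
      # Drop every per-slab series before it is stored.
      - source_labels: [__name__]
        regex: 'memcached_slab_.*'
        action: drop
```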