feat(outputs): Only copy metric if it's not filtered out #15883
Conversation
Thanks a lot @LarsStegman for your investigation!!! How about moving the copy into the agent loop

```go
for metric := range unit.src {
    for i, output := range unit.outputs {
        output.AddMetric(metric, i < len(a.Config.Outputs)-1)
    }
}
```

and in the model do

```go
func (r *RunningOutput) AddMetric(m telegraf.Metric, requireCopy bool) {
    metric := m
    ok, err := r.Config.Filter.Select(metric)
    if err != nil {
        r.log.Errorf("filtering failed: %v", err)
    } else if !ok {
        r.metricFiltered(metric)
        return
    }
    if requireCopy {
        metric = m.Copy()
    }
    r.Config.Filter.Modify(metric)
    if len(metric.FieldList()) == 0 {
        r.metricFiltered(metric)
        return
    }
    // ...
}
```

This way we do not need to expose the internals of the output model into the agent.
Yeah, that's also a good solution for me. I will make that change!
Force-pushed from 784da6d to a61bddf
@srebhan I am not sure why the memory leak test is failing. I should be making fewer allocations, not more. Do you have any idea?
@LarsStegman this is unrelated and unfortunately a flaky test... We need to look at it some time but currently things are busy. Ignore the issue for now...
Awesome! I wonder if we should keep the original `AddMetric` function signature and always copy the metric, and have a second function `AddMetricNoCopy` which does not copy the metric. This way we save a few `if`s and can keep the tests as they were...
That does sound like a better solution to be honest. All the ifs were getting a bit iffy.
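For illustration, a minimal sketch of what this two-function approach could look like on the output model. Only `AddMetric`, `AddMetricNoCopy`, and the `requireCopy` idea come from the discussion above; the shared helper `addMetric` is an assumption for this sketch, not necessarily the final code:

```go
// AddMetric keeps the original contract: the caller retains ownership of m,
// so the output must copy it before modifying or buffering it.
func (r *RunningOutput) AddMetric(m telegraf.Metric) {
    r.addMetric(m, true)
}

// AddMetricNoCopy hands the metric over to the output, so no copy is made.
func (r *RunningOutput) AddMetricNoCopy(m telegraf.Metric) {
    r.addMetric(m, false)
}
```

The agent could then pass the original metric to the last output only (for example `if i == len(unit.outputs)-1 { output.AddMetricNoCopy(metric) } else { output.AddMetric(metric) }`), keeping all the copy/no-copy branching inside the model and the existing tests unchanged.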
Force-pushed from a61bddf to b080e56
Alright, I do like this better!
Force-pushed from 589c177 to 6da9241
Nice. Just get rid of the underscore in the function name and we are good to go.
Nice! Thanks @LarsStegman!
@LarsStegman you need this

```diff
diff --git a/plugins/inputs/cloud_pubsub_push/cloud_pubsub_push_test.go b/plugins/inputs/cloud_pubsub_push/cloud_pubsub_push_test.go
index 252b843fc..9e8aa07d1 100644
--- a/plugins/inputs/cloud_pubsub_push/cloud_pubsub_push_test.go
+++ b/plugins/inputs/cloud_pubsub_push/cloud_pubsub_push_test.go
@@ -196,6 +196,7 @@ func TestServeHTTP(t *testing.T) {
             for m := range d {
                 ro.AddMetric(m)
                 ro.Write() //nolint:errcheck // test will fail anyway if the write fails
+                m.Accept()
             }
         }(dst)
```

to pass the unit tests. Those tests are horrible but that's the easiest fix...
Download PR build artifacts for linux_amd64.tar.gz, darwin_arm64.tar.gz, and windows_amd64.zip. 📦
Looks great! Thanks @LarsStegman!
```go
if err != nil {
    r.log.Errorf("filtering failed: %v", err)
} else if !ok {
    r.MetricsFiltered.Incr(1)
```
Looks great! Just one minor nitpick: is there any reason this isn't calling `r.metricFiltered(metric)` like the similar functions?
Yes, that function also calls `Drop` on the metric, which we should not do if we haven't copied it yet, since we haven't taken ownership of it until then.
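To make the ownership point concrete, here is a hedged sketch of the filter-before-copy path. The calls themselves are the ones quoted in this thread; the helper name `addMetric` and the exact surrounding structure are assumptions for illustration:

```go
func (r *RunningOutput) addMetric(m telegraf.Metric, requireCopy bool) {
    metric := m

    // Run the selection filter before any copy, so metrics that this
    // output rejects are never allocated a second time.
    ok, err := r.Config.Filter.Select(metric)
    if err != nil {
        r.log.Errorf("filtering failed: %v", err)
    } else if !ok {
        // Only count the rejection. r.metricFiltered would also call Drop,
        // which must not happen here because the output has not copied the
        // metric and therefore has not taken ownership of it yet.
        r.MetricsFiltered.Incr(1)
        return
    }

    if requireCopy {
        // From here on the output works on its own copy.
        metric = m.Copy()
    }

    r.Config.Filter.Modify(metric)
    if len(metric.FieldList()) == 0 {
        // At this point the output owns the metric (either its own copy or
        // one handed over without a copy), so dropping it via metricFiltered
        // is fine.
        r.metricFiltered(metric)
        return
    }
    // ... buffer the metric for the next write, as before ...
}
```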
Summary
This PR makes sure that an output plugin actually selects a metric for output before copying it. This change had a big impact on runtime/GC overhead in a real-world deployment: GC time went down from 55% of CPU time to 25%.
In our real-world case, every output plugin was only interested in a subset of the metrics and there was no overlap between the subsets. This means that (x-1)/x of all copied metrics were immediately discarded, where x is the number of outputs.
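For example (illustrative numbers, not measured in this PR): with x = 4 outputs whose filters do not overlap, (4-1)/4 = 75% of the copies made by the old code were discarded right after being created.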
Checklist
Related issues
resolves #15882