[Backport 7.76.x] [networks] Remove faulty PID reuse detection logic #45716
Conversation
Backport 3937bbf from #45686.

What does this PR do?
This PR removes the PID reuse detection logic, which produced far too many false positives. The root cause was that the EVP stream does not use the same clock as procfs data, making PID reuse detection with this method impossible. Bryce had [commented on this](#43099 (comment)) originally, noting that the rest of the agent doesn't currently catch this scenario; in retrospect, I should have deleted this code back then.

Motivation
Improve the resolv.conf detection rate.

Describe how you validated your changes
The buggy code no longer exists. TestDNSWorkload should continue to pass.

(cherry picked from commit 3937bbf)
Co-authored-by: stuart.geipel <[email protected]>
Co-authored-by: Stuart Geipel <[email protected]>
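To make the failure mode concrete, here is a minimal, hypothetical Go sketch of this style of check (not the agent's actual code; the function and variable names are invented for illustration). It compares an event timestamp against a procfs-derived process start time; any skew between the two clocks makes a still-running process look like it started after the event, which a naive check reads as PID reuse.

```go
package main

import (
	"fmt"
	"time"
)

// isLikelyPIDReuse is a hypothetical version of the removed style of check:
// it flags PID reuse when the process's start time (as derived from
// /proc/<pid>/stat) appears to postdate the event that referenced the PID.
// This only works if both timestamps come from the same clock.
func isLikelyPIDReuse(eventTime, procStartTime time.Time) bool {
	return procStartTime.After(eventTime)
}

func main() {
	eventTime := time.Now()

	// A process that really started 1s before the event, observed through a
	// clock running 2s ahead of the event stream's clock, appears to have
	// started 1s after the event.
	clockSkew := 2 * time.Second
	procStartTime := eventTime.Add(-1*time.Second + clockSkew)

	// Prints "true": a false positive caused purely by clock skew.
	fmt.Println("flagged as PID reuse:", isLikelyPIDReuse(eventTime, procStartTime))
}
```

With no shared timebase between the two sources, no threshold can make this comparison reliable, which presumably is why the fix here is removal rather than tuning.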
Static quality checks ✅
Please find below the results from static quality gates.

Successful checks
28 successful checks with minimal change (< 2 KiB)
On-wire sizes (compressed)
Regression Detector
Regression Detector Results
Metrics dashboard
Baseline: 0b30861
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | +1.94 | [-1.11, +4.99] | 1 | Logs |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | +1.94 | [-1.11, +4.99] | 1 | Logs |
| ➖ | quality_gate_metrics_logs | memory utilization | +0.42 | [+0.20, +0.63] | 1 | Logs bounds checks dashboard |
| ➖ | ddot_metrics | memory utilization | +0.39 | [+0.18, +0.59] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulative | memory utilization | +0.34 | [+0.17, +0.50] | 1 | Logs |
| ➖ | ddot_metrics_sum_delta | memory utilization | +0.29 | [+0.08, +0.50] | 1 | Logs |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | +0.17 | [+0.12, +0.23] | 1 | Logs |
| ➖ | file_tree | memory utilization | +0.06 | [+0.01, +0.11] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | +0.05 | [-0.44, +0.55] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_v3 | ingress throughput | +0.02 | [-0.11, +0.14] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | +0.01 | [-0.03, +0.06] | 1 | Logs bounds checks dashboard |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.01 | [-0.11, +0.13] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | +0.01 | [-0.04, +0.05] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.01 | [-0.10, +0.08] | 1 | Logs |
| ➖ | ddot_logs | memory utilization | -0.02 | [-0.08, +0.05] | 1 | Logs |
| ➖ | docker_containers_memory | memory utilization | -0.02 | [-0.10, +0.06] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.02 | [-0.43, +0.39] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | -0.03 | [-0.42, +0.36] | 1 | Logs |
| ➖ | otlp_ingest_metrics | memory utilization | -0.14 | [-0.29, +0.01] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | -0.22 | [-0.45, +0.01] | 1 | Logs |
| ➖ | otlp_ingest_logs | memory utilization | -0.23 | [-0.33, -0.13] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | -0.27 | [-0.31, -0.23] | 1 | Logs bounds checks dashboard |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -0.96 | [-1.03, -0.89] | 1 | Logs |
| ➖ | quality_gate_logs | % cpu utilization | -1.49 | [-2.99, +0.02] | 1 | Logs bounds checks dashboard |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | cpu_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | memory_usage | 10/10 | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true (a short sketch of this rule follows the list):
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
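As a reading aid, here is a small, hypothetical Go sketch of the decision rule these criteria describe; the type and function names are invented, and the thresholds mirror the values stated above (5.00% effect size tolerance, 90.00% confidence interval).

```go
package main

import (
	"fmt"
	"math"
)

// experimentResult captures one row of the change-detection table above.
type experimentResult struct {
	deltaMeanPct  float64 // estimated Δ mean %
	ciLowPct      float64 // lower bound of the 90% CI for Δ mean %
	ciHighPct     float64 // upper bound of the 90% CI for Δ mean %
	markedErratic bool    // whether the configuration marks the experiment "erratic"
}

const effectSizeTolerancePct = 5.0

// worthInvestigating applies the three criteria: the effect is large enough,
// the confidence interval excludes zero, and the experiment is not erratic.
func worthInvestigating(r experimentResult) bool {
	bigEnough := math.Abs(r.deltaMeanPct) >= effectSizeTolerancePct
	ciExcludesZero := r.ciLowPct > 0 || r.ciHighPct < 0
	return bigEnough && ciExcludesZero && !r.markedErratic
}

func main() {
	// docker_containers_cpu from the table above: +1.94 [-1.11, +4.99].
	r := experimentResult{deltaMeanPct: 1.94, ciLowPct: -1.11, ciHighPct: 4.99}
	// Prints "false": the effect is under 5% and the CI contains zero.
	fmt.Println("worth investigating:", worthInvestigating(r))
}
```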
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
/merge