fix(AdmissionController): Unify Caching for Secrets within DCA #45713
base: main
Conversation
Static quality checks ✅
Please find below the results from static quality gates.
Successful checks: 30 successful checks with minimal change (< 2 KiB)
On-wire sizes (compressed)
Force-pushed from f4181b1 to 80044bf
Regression Detector
Regression Detector Results
Baseline: 1576f3a
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | -4.79 | [-7.81, -1.77] | 1 | Logs |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | quality_gate_logs | % cpu utilization | +1.31 | [-0.21, +2.84] | 1 | Logs bounds checks dashboard |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +1.03 | [+0.91, +1.15] | 1 | Logs |
| ➖ | quality_gate_metrics_logs | memory utilization | +0.70 | [+0.48, +0.92] | 1 | Logs bounds checks dashboard |
| ➖ | otlp_ingest_metrics | memory utilization | +0.62 | [+0.46, +0.77] | 1 | Logs |
| ➖ | ddot_metrics | memory utilization | +0.56 | [+0.34, +0.78] | 1 | Logs |
| ➖ | ddot_metrics_sum_delta | memory utilization | +0.44 | [+0.23, +0.64] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | +0.38 | [+0.15, +0.61] | 1 | Logs |
| ➖ | docker_containers_memory | memory utilization | +0.32 | [+0.24, +0.39] | 1 | Logs |
| ➖ | file_tree | memory utilization | +0.21 | [+0.16, +0.26] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | +0.12 | [+0.08, +0.17] | 1 | Logs bounds checks dashboard |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | +0.11 | [+0.06, +0.16] | 1 | Logs |
| ➖ | otlp_ingest_logs | memory utilization | +0.10 | [-0.02, +0.21] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | +0.05 | [-0.35, +0.46] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | +0.04 | [-0.01, +0.08] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulative | memory utilization | +0.04 | [-0.13, +0.20] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | -0.00 | [-0.53, +0.52] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | -0.00 | [-0.39, +0.38] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_v3 | ingress throughput | -0.01 | [-0.13, +0.11] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.01 | [-0.14, +0.13] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.01 | [-0.10, +0.09] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | -0.05 | [-0.08, -0.01] | 1 | Logs bounds checks dashboard |
| ➖ | ddot_logs | memory utilization | -0.54 | [-0.61, -0.48] | 1 | Logs |
| ➖ | docker_containers_cpu | % cpu utilization | -4.79 | [-7.81, -1.77] | 1 | Logs |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | cpu_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | memory_usage | 10/10 | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
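For concreteness, that decision rule amounts to a check like the following hypothetical sketch (the detector's actual implementation is not part of this PR, and these names are illustrative):

```go
package regression

// Result summarizes one experiment's A/B comparison.
type Result struct {
	DeltaMeanPct  float64 // estimated Δ mean %
	CILow, CIHigh float64 // 90% confidence interval bounds for Δ mean %
	Erratic       bool    // experiment marked "erratic" in its configuration
}

// isRegression reports whether a result is worth investigating further,
// per the three criteria listed above (tolerancePct in percent, e.g. 5.0).
func isRegression(r Result, tolerancePct float64) bool {
	bigEnough := r.DeltaMeanPct >= tolerancePct || r.DeltaMeanPct <= -tolerancePct
	ciExcludesZero := r.CILow > 0 || r.CIHigh < 0
	return bigEnough && ciExcludesZero && !r.Erratic
}
```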
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
What does this PR do?
Eliminates the 5-minute certificate cache in the admission webhook HTTP server by using the shared Secret informer cache instead. Using multiple independent caches caused problems whenever the Secret value was updated in the informer cache but not in the HTTP server's cache (up to a 5-minute delay).
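As a rough sketch of the idea (the type and field names below are illustrative, not the actual DCA code), the HTTP server can resolve its serving certificate through the shared informer's lister on every TLS handshake, so there is no second cache that can go stale:

```go
package admission

import (
	"crypto/tls"
	"fmt"
	"net/http"

	corev1listers "k8s.io/client-go/listers/core/v1"
)

// certSource resolves the serving certificate from the shared Secret
// informer cache instead of a private cache with its own TTL.
type certSource struct {
	secrets   corev1listers.SecretLister
	namespace string
	name      string
}

// GetCertificate is called by crypto/tls on each handshake. Lister reads
// are served from the informer's in-memory store, so this is cheap and is
// always exactly as fresh as the informer itself.
func (s *certSource) GetCertificate(_ *tls.ClientHelloInfo) (*tls.Certificate, error) {
	secret, err := s.secrets.Secrets(s.namespace).Get(s.name)
	if err != nil {
		return nil, fmt.Errorf("reading certificate secret: %w", err)
	}
	cert, err := tls.X509KeyPair(secret.Data["tls.crt"], secret.Data["tls.key"])
	if err != nil {
		return nil, fmt.Errorf("parsing certificate: %w", err)
	}
	return &cert, nil
}

// newServer wires the informer-backed source into the webhook HTTP server.
func newServer(lister corev1listers.SecretLister, namespace, name string) *http.Server {
	src := &certSource{secrets: lister, namespace: namespace, name: name}
	return &http.Server{
		Addr:      ":8443",
		TLSConfig: &tls.Config{GetCertificate: src.GetCertificate},
	}
}
```

Parsing the key pair on every handshake could be memoized on the Secret's resourceVersion, but even unmemoized it avoids any fixed expiry window.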
Motivation
During certificate rotation, there was a potential 5-minute downtime window in which the HTTP server kept serving the stale certificate from its private cache even though the informer cache already held the rotated Secret. By using the same informer cache that the Webhook Controller uses, both components stay synchronized.
Describe how you validated your changes
Unit Tests: Existing secret controller tests verify that secret rotations/refreshes still propagate smoothly; a sketch of the property they cover appears after this list.
Manual: Deploy the admission controller with `mutateUnlabelled: true` and `failurePolicy: fail`. Before the patch, deleting the secret (to trigger a rotation) produced `FailedCreate` events for 5 minutes. Now, deleting the secret and deploying new workloads causes no errors!
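For illustration only (this is not the repo's actual test), the property those tests rely on can be demonstrated with a fake clientset: an update to the Secret becomes visible through the shared informer cache as soon as the watch event is processed, with no separate TTL to wait out:

```go
package admission

import (
	"context"
	"testing"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes/fake"
)

func TestSecretUpdatePropagates(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Namespace: "default", Name: "webhook-certificate"},
		Data:       map[string][]byte{"tls.crt": []byte("old")},
	}
	client := fake.NewSimpleClientset(secret)
	factory := informers.NewSharedInformerFactory(client, 0)
	lister := factory.Core().V1().Secrets().Lister()
	factory.Start(ctx.Done())
	factory.WaitForCacheSync(ctx.Done())

	// Rotate the certificate by updating the Secret through the API.
	secret = secret.DeepCopy()
	secret.Data["tls.crt"] = []byte("new")
	if _, err := client.CoreV1().Secrets("default").Update(ctx, secret, metav1.UpdateOptions{}); err != nil {
		t.Fatal(err)
	}

	// The informer receives the watch event asynchronously; poll briefly.
	deadline := time.Now().Add(2 * time.Second)
	for time.Now().Before(deadline) {
		got, err := lister.Secrets("default").Get("webhook-certificate")
		if err == nil && string(got.Data["tls.crt"]) == "new" {
			return
		}
		time.Sleep(10 * time.Millisecond)
	}
	t.Fatal("updated Secret never appeared in the informer cache")
}
```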
Additional Notes
The original implementation considered storing two certificates with overlapping lifetimes in the CA bundle. For example, C1 is valid from T1 to T10 and C2 from T5 to T15, and both certificates are stored in the secret. When we reach a time such as T8 and want to transition to the next valid certificate, it is already present throughout all the relevant clients.
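A minimal sketch of how that alternative could choose which certificate to serve, assuming the bundle is already parsed into `x509.Certificate` values (the helper is hypothetical):

```go
package admission

import (
	"crypto/x509"
	"errors"
	"time"
)

// pickValid returns the certificate to serve at time now, preferring the
// one whose validity window extends furthest into the future so clients
// transition to the successor (e.g. C2 at T8) before the old one expires.
func pickValid(certs []*x509.Certificate, now time.Time) (*x509.Certificate, error) {
	var best *x509.Certificate
	for _, c := range certs {
		if now.Before(c.NotBefore) || now.After(c.NotAfter) {
			continue // outside this certificate's validity window
		}
		if best == nil || c.NotAfter.After(best.NotAfter) {
			best = c
		}
	}
	if best == nil {
		return nil, errors.New("no certificate valid at the given time")
	}
	return best, nil
}
```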
Before this change, the admission controller could serve a stale certificate for up to 5 minutes in the worst case. Now the worst case is simply how long the API request takes to update the MutatingWebhookConfiguration with the new configuration. The shared informer on the Cluster Agent is notified of a change to the secret's certificate, so a request arriving at the HTTP server may already be served with the new certificate while, under a race, the MutatingWebhookConfiguration has not yet been updated with it.
From an initial search online, API server requests complete on the order of 100 ms. Our cache had a lifetime of 300,000 ms, so this shrinks the worst-case window by roughly 3000x (300,000 ms / 100 ms), a quick, easy win for the availability of the admission controller.