Halt advancement no values found for nginx metric request-success-rate probably podinfo.test is not receiving traffic: running query failed: no values found
#1689
Were you able to run it according to the documentation? I keep getting errors during the canary analysis:
Halt advancement no values found for nginx metric request-success-rate probably podinfo.test is not receiving traffic: running query failed: no values found
......
Advance podinfo.test canary weight 5
Rolling back podinfo.test failed checks threshold reached 10
Canary failed! Scaling down podinfo.test
It seems that the testloader cannot resolve "app.example.com".
Even when I access it through the IP address, the ingress does not pass traffic to the corresponding service.
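For what it's worth, "app.example.com" in the tutorial is only a Host header, not a resolvable DNS name, so the loadtester has to hit the ingress controller service directly. A quick sanity check, assuming curl is available in the loadtester image and using the deployment/service names from the tutorial (adjust for your setup):

```sh
# Send a request through the NGINX ingress controller with the tutorial's Host header.
# "flagger-loadtester" and "ingress-nginx-controller.ingress-nginx" are assumptions
# based on the tutorial manifests.
kubectl -n test exec deploy/flagger-loadtester -- \
  curl -sS -H "Host: app.example.com" http://ingress-nginx-controller.ingress-nginx
```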
Describe the bug
I am using Flagger with the NGINX ingress controller and Prometheus.
When I introduce a new version of the pods, the canary analysis kicks in, but I see the message below in the Flagger logs:
{"level":"info","ts":"2024-07-25T10:44:21.119Z","caller":"controller/events.go:33","msg":"Starting canary analysis for podinfo.test","canary":"podinfo.test"}
{"level":"info","ts":"2024-07-25T10:44:21.128Z","caller":"controller/events.go:33","msg":"Pre-rollout check acceptance-test passed","canary":"podinfo.test"}
{"level":"info","ts":"2024-07-25T10:44:21.154Z","caller":"controller/events.go:33","msg":"Advance podinfo.test canary weight 5","canary":"podinfo.test"}
{"level":"info","ts":"2024-07-25T10:44:30.247Z","caller":"controller/events.go:45","msg":"Halt advancement no values found for nginx metric request-success-rate probably podinfo.test is not receiving traffic: running query failed: no values found","canary":"podinfo.test"}
{"level":"info","ts":"2024-07-25T10:44:40.271Z","caller":"controller/events.go:33","msg":"Advance podinfo.test canary weight 10","canary":"podinfo.test"}
{"level":"info","ts":"2024-07-25T10:44:50.266Z","caller":"controller/events.go:33","msg":"Advance podinfo.test canary weight 15","canary":"podinfo.test"}
{"level":"info","ts":"2024-07-25T10:45:00.270Z","caller":"controller/events.go:33","msg":"Advance podinfo.test canary weight 20","canary":"podinfo.test"}
{"level":"info","ts":"2024-07-25T10:45:10.273Z","caller":"controller/events.go:33","msg":"Advance podinfo.test canary weight 25","canary":"podinfo.test"}
{"level":"info","ts":"2024-07-25T10:45:20.271Z","caller":"controller/events.go:33","msg":"Advance podinfo.test canary weight 30","canary":"podinfo.test"}
{"level":"info","ts":"2024-07-25T10:45:30.269Z","caller":"controller/events.go:33","msg":"Advance podinfo.test canary weight 35","canary":"podinfo.test"}
{"level":"info","ts":"2024-07-25T10:45:40.267Z","caller":"controller/events.go:33","msg":"Advance podinfo.test canary weight 40","canary":"podinfo.test"}
{"level":"info","ts":"2024-07-25T10:45:50.266Z","caller":"controller/events.go:33","msg":"Advance podinfo.test canary weight 45","canary":"podinfo.test"}
{"level":"info","ts":"2024-07-25T10:46:00.268Z","caller":"controller/events.go:33","msg":"Advance podinfo.test canary weight 50","canary":"podinfo.test"}
{"level":"info","ts":"2024-07-25T10:46:10.264Z","caller":"controller/events.go:33","msg":"Copying podinfo.test template spec to podinfo-primary.test","canary":"podinfo.test"}
{"level":"info","ts":"2024-07-25T10:46:20.247Z","caller":"controller/events.go:45","msg":"podinfo-primary.test not ready: waiting for rollout to finish: 1 old replicas are pending termination","canary":"podinfo.test"}
{"level":"info","ts":"2024-07-25T10:46:30.252Z","caller":"canary/hpa_reconciler.go:152","msg":"HorizontalPodAutoscaler v2 podinfo-primary.test updated","canary":"podinfo.test"}
{"level":"info","ts":"2024-07-25T10:46:30.252Z","caller":"controller/events.go:33","msg":"Routing all traffic to primary","canary":"podinfo.test"}
{"level":"info","ts":"2024-07-25T10:46:40.261Z","caller":"controller/events.go:33","msg":"Promotion completed! Scaling down podinfo.test","canary":"podinfo.test"}
I also checked the NGINX Prometheus query; it looks fine and returns 100%.
In spite of that, I see the halt-advancement log for the request success rate, yet the analysis still moves forward and completes successfully.
Can you please check what is wrong here?
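The builtin nginx request-success-rate check computes the percentage of non-5xx requests from the ingress controller metrics, scoped to the canary's ingress. A rough way to run an equivalent query by hand; the Prometheus service name, namespace, and ingress label values below are assumptions, so check the labels actually present on nginx_ingress_controller_requests in your Prometheus:

```sh
# Port-forward the Prometheus installed alongside Flagger (service name is an assumption).
kubectl -n ingress-nginx port-forward svc/flagger-prometheus 9090:9090 &

# Approximation of the builtin request-success-rate query for NGINX:
# share of non-5xx requests hitting the podinfo ingress in the test namespace.
curl -sG http://localhost:9090/api/v1/query --data-urlencode \
  'query=sum(rate(nginx_ingress_controller_requests{namespace="test",ingress="podinfo",status!~"5.*"}[1m]))
         / sum(rate(nginx_ingress_controller_requests{namespace="test",ingress="podinfo"}[1m])) * 100'
```

If this returns no samples during the analysis interval, Flagger logs the same "no values found" halt that appears above.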
To Reproduce
I am following the steps given here:
https://docs.flagger.app/tutorials/nginx-progressive-delivery
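While reproducing, the halts usually mean the canary ingress receives no traffic during the analysis interval. The tutorial drives traffic through the load-test webhook with hey; the same command can be run by hand from the loadtester pod to confirm that requests reach the controller (command and names taken from the tutorial, adjust as needed):

```sh
# Generate traffic the same way the tutorial's load-test webhook does:
# one minute of requests with the Host header pointing at the canary ingress.
kubectl -n test exec deploy/flagger-loadtester -- \
  hey -z 1m -q 10 -c 2 -host app.example.com http://ingress-nginx-controller.ingress-nginx
```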
Expected behavior
Flagger should not print the error for the metrics check when the canary is receiving traffic.
Additional context