Error for failed revision is not reported due to scaling to zero #14157
Comments
Thanks, this has always bothered me, and the repro case is super-helpful! Aggregating all Pods' errors into a Revision seems likely to cause a hotspot, but capturing errors on the one-to-zero case seems pretty safe, and even a bit clever. Thinking further, we might be able to avoid spam on the Revision by using a model a bit like Events, where we rate-limit the updates and don't update if the error is the same as the one already reported.
@dprotaso I don't think this is the same, as the pods are not sticking around; the image pull error does not block the pods from scaling down. I will double-check.
Indeed, this is different AFAIK.
We had some more runs setting minScale=1 and progressDeadline=10 that exposed another problem. Looking at the logs, we are actually hitting a K8s limitation where the deployment is stuck with conditions:
but the progress deadline is not used if we are outside of a rollout; see the K8s issues/reproducer here. Now the revision consistently shows:
It also shows, though:
I am confused by the fact that we report Ready in case of failures. The Knative reconciler is not getting triggered because the deployment does not change. For data/logs see the attached file: progress-deadline-10s-minscale-1.zip. We also have a run with minScale=0, progress-deadline-10s.zip; there the deployment status is the same, but the revision status is stuck at:
My understanding is that the conditions in this scenario hide the problem of the initial replica never coming up; see next. In contrast with the above, assume we have a healthy revision with minScale=0. When we have the first replica up and running, and before scaling down, the revision conditions are:
After the revision is scaled down:
The final conditions for a healthy and a non-healthy revision are the same in this case.
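For reference, after a revision with minScale=0 is scaled down, its status typically looks roughly like the sketch below. This is illustrative only and not copied from the attached logs; it just uses the standard Revision condition types (Active, ContainerHealthy, ResourcesAvailable, Ready):

```yaml
# Illustrative sketch of a Revision status after scale-to-zero (not taken from the attached logs).
# Per the observation above, the failing revision ends up in the same shape,
# which is exactly why the image pull failure becomes invisible.
status:
  conditions:
    - type: Active
      status: "False"
      reason: NoTraffic
      message: The target is not receiving traffic.
    - type: ContainerHealthy
      status: "True"
    - type: ResourcesAvailable
      status: "True"
    - type: Ready
      status: "True"
```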
In what area(s)?
/area autoscale
/area api
What version of Knative?
Any.
Expected Behavior
AFAIK, when a new revision is created, the deployment's replicas field is set to 1 by default in order to scale up and check whether the revision can become healthy (see discussion here). A customer has a scenario as follows:
The UX should be much better; although we have a debugging guide listing a number of places where to look for an error, this is not really helping. First, the error goes away and several resources look ready due to scaling down to zero.
I expected the deployment not to progress and get stuck with something similar to the status below (or at least provide that option, as a normal deployment would stabilize to that), or to report the failure and then scale down:
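Purely as an illustration (the actual status is in the attached files, and the ReplicaSet name below is hypothetical), a plain Deployment that exceeds its progress deadline normally stabilizes with conditions along these lines:

```yaml
# Typical shape of a Deployment that hit its progress deadline (illustrative sketch).
status:
  conditions:
    - type: Available
      status: "False"
      reason: MinimumReplicasUnavailable
      message: Deployment does not have minimum availability.
    - type: Progressing
      status: "False"
      reason: ProgressDeadlineExceeded
      message: ReplicaSet "my-app-5d9c7b7f9b" has timed out progressing.
```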
Unfortunately, by default the code here cannot capture this due to scaling to zero, and this still occurs even if we lower the deadline to a reasonable value such as serving.knative.dev/progress-deadline: 30s (although it might depend on the app to set this properly, which is a bit painful). I think we could increase autoscaling.knative.dev/scale-down-delay, set minScale to 1, or lower the request timeout, but ideally I would expect Knative to report that the ksvc is not healthy in this scenario, although I understand that this could be a transient error (the image pull could have been fixed in the meantime). There is also a timeout here, activationTimeoutBuffer, that could be made configurable? I don't see any logs for the revision not being activated. Maybe we should mark it as unhealthy and, when a new request comes in, re-evaluate the ksvc status.
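To make the knobs mentioned above concrete, here is a minimal sketch of a Service that sets them on the revision template. The service name, image, and values are hypothetical examples, not recommendations:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                                   # hypothetical name
spec:
  template:
    metadata:
      annotations:
        # Fail the revision faster when the pod never comes up.
        serving.knative.dev/progress-deadline: "30s"
        # Keep the pod (and its error) around longer before scale-to-zero.
        autoscaling.knative.dev/scale-down-delay: "15m"
        # Or avoid scale-to-zero entirely.
        autoscaling.knative.dev/min-scale: "1"
    spec:
      containers:
        - image: registry.example.com/missing:latest   # unpullable image used to reproduce
```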
Actual Behavior
The behavior of the deployment is captured in the attached files; the number prefix on each filename represents time ordering. Everything seems to be fine, although the deployment never reached the desired replicas.
Steps to Reproduce the Problem
Described above.
cc @evankanderson @dprotaso @mattmoor @psschwei