Add the ability to create a horizontal pod autoscaler to the helm chart #1384
@josh-ferrell have you tried running multiple replicas with a PDB to solve this issue? I'm not sure that an HPA would give you the responsiveness required for this pattern, but more fundamentally, MS would be the source of the metrics used to trigger the HPA, so I don't think this would work. Have I missed something?
@stevehipwell We currently run multiple replicas with a PDB. The intention behind the HPA was to autoscale MS before responses would ever slow to the point of becoming a problem, since it represents a single point of failure for all HPAs.
Have you tested the add-on resizer for your use case?
AFAIK the add-on resizer works like the VPA but scales based on cluster size. My theory was that an HPA would keep client-side calls from increasing metrics-server response latency beyond a certain point, since the calls are spread across the metrics-server service. I haven't pushed it to that limit yet with scale testing, but we do observe MS CPU utilization correlating with the rate of client-side calls.
@josh-ferrell check out the HA project docs, that should cover your use case if combined with a PDB and a
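For reference, the multi-replica-plus-PDB setup discussed above might look something like the sketch below. The label selector and namespace are assumptions and would need to match what your chart release actually applies:

```yaml
# Hypothetical PodDisruptionBudget for a two-replica metrics-server
# deployment; labels/namespace are illustrative assumptions.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  # Keep at least one replica serving the Metrics API during
  # voluntary disruptions (node drains, rollouts, etc.).
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: metrics-server
```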
/triage accepted
What would you like to be added:
The ability to create a horizontal pod autoscaler via helm chart values.
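A values schema along these lines could express the request; the key names below are illustrative, not part of the current chart:

```yaml
# Hypothetical values.yaml fragment for the requested feature.
# None of these keys exist in the chart today; they are a sketch
# of what the issue is asking for.
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
```

When enabled, the chart template would presumably render an `autoscaling/v2` HorizontalPodAutoscaler targeting the metrics-server Deployment, subject to the caveat raised in the comments that CPU-based HPA scaling is itself driven by metrics-server's own resource metrics.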
Why is this needed:
Autoscaling the metrics-server alleviates performance degradation in other applications, which can occur when the HPA controller isn't able to receive a timely response from the metrics-server when determining scale requirements.
/kind feature