The proposal is to detect image size before setting requests and limits for a scan job.
Currently the requests and limits are hardcoded, which makes it difficult to choose resource requests that allow the pod to be scheduled when image sizes range from a few MB to several GB.

If the requests and limits are too low, the pod will be OOMKilled.

If they are too high, resources are wasted; in a dynamic environment, e.g. with Karpenter, it also provisions bigger nodes, which cost more.

From my current understanding, a scan pod requires roughly the same amount of memory as the image it scans, so setting requests and limits slightly above the image size should help optimize resource consumption (see the sketch below).
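A minimal sketch of the idea, assuming go-containerregistry is used to read the image size from the registry manifest without pulling the image. Note that the manifest reports compressed layer sizes, which understate the on-disk size, so the headroom factor and the memory floor below are purely illustrative values, not measured or proposed defaults:

```go
package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/remote"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// imageSizeBytes sums the compressed layer sizes reported by the registry
// manifest, without pulling the image.
func imageSizeBytes(imageRef string) (int64, error) {
	ref, err := name.ParseReference(imageRef)
	if err != nil {
		return 0, err
	}
	img, err := remote.Image(ref)
	if err != nil {
		return 0, err
	}
	manifest, err := img.Manifest()
	if err != nil {
		return 0, err
	}
	var total int64
	for _, layer := range manifest.Layers {
		total += layer.Size
	}
	return total, nil
}

// scanJobResources derives memory requests/limits from the image size,
// adding headroom and enforcing a floor so tiny images still get enough
// memory for the scanner itself. The 1.2 factor and 128Mi floor are
// hypothetical, for illustration only.
func scanJobResources(imageBytes int64) corev1.ResourceRequirements {
	const headroom = 1.2
	mem := int64(float64(imageBytes) * headroom)
	if floor := int64(128 << 20); mem < floor {
		mem = floor
	}
	quantity := *resource.NewQuantity(mem, resource.BinarySI)
	return corev1.ResourceRequirements{
		Requests: corev1.ResourceList{corev1.ResourceMemory: quantity},
		Limits:   corev1.ResourceList{corev1.ResourceMemory: quantity},
	}
}

func main() {
	size, err := imageSizeBytes("alpine:3.19")
	if err != nil {
		panic(err)
	}
	fmt.Printf("image size: %d bytes, resources: %+v\n", size, scanJobResources(size))
}
```

The headroom factor (and whether to size against compressed or uncompressed layers) would need to be tuned against real scan jobs, but the point is that the sizing decision can be made per image before the scan job is created, instead of relying on a single hardcoded value.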