Added more realistic memory requests.
This lets Kubernetes make better choices about where to schedule the pods, and communicates sensible minimum resource requirements to administrators. On a single-user Mastodon instance running on a three-node Kubernetes cluster, after a week or so of use, the per-pod memory usage looks like this:

```
tero@arcones:~$ kubectl top pods -n mastodon
NAME                                           CPU(cores)   MEMORY(bytes)
mastodon-elasticsearch-coordinating-0          6m           403Mi
mastodon-elasticsearch-coordinating-1          28m          189Mi
mastodon-elasticsearch-data-0                  10m          1432Mi
mastodon-elasticsearch-data-1                  5m           1513Mi
mastodon-elasticsearch-ingest-0                6m           418Mi
mastodon-elasticsearch-ingest-1                6m           396Mi
mastodon-elasticsearch-master-0                24m          466Mi
mastodon-elasticsearch-master-1                10m          221Mi
mastodon-postgresql-0                          12m          276Mi
mastodon-redis-master-0                        16m          37Mi
mastodon-redis-replicas-0                      7m           34Mi
mastodon-sidekiq-all-queues-549b4bb7b4-zvj2m   266m         499Mi
mastodon-streaming-78465f778d-6xfg2            1m           96Mi
mastodon-web-774c5c94f9-f5bhz                  22m          418Mi
```

Hence we make the following adjustments to the Bitnami defaults:

- `mastodon-elasticsearch-coordinating`: `256Mi->512Mi`
- `mastodon-elasticsearch-data`: the default `2048Mi` is fine.
- `mastodon-elasticsearch-master`: `256Mi->512Mi`
- `mastodon-redis-master`: `0->56Mi`
- `mastodon-redis-replicas`: `0->56Mi`
- `mastodon-postgresql`: `256Mi->384Mi`

And to the Mastodon defaults:

- `mastodon-sidekiq-all-queues`: `0->512Mi`
- `mastodon-streaming`: `0->128Mi`
- `mastodon-web`: `0->512Mi`

The original idea of keeping these requests at zero is a good default when the minimal requirements are unknown. However, a single-user instance gives us a reasonable lower bound, and leaving the requests at zero only causes trouble for people. The requirements will of course change over time, but they are chiefly expected to grow.
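For reference, a minimal sketch of how the Bitnami-side requests could be expressed as Helm values overrides. The key paths here are assumptions based on the usual Bitnami subchart layout (`elasticsearch`, `redis`, `postgresql`) and may not match this chart's `values.yaml` verbatim:

```yaml
# Illustrative sketch only; key paths assume the common Bitnami subchart
# layout and are not verified against this chart's values.yaml.
elasticsearch:
  coordinating:
    resources:
      requests:
        memory: 512Mi
  master:
    resources:
      requests:
        memory: 512Mi
redis:
  master:
    resources:
      requests:
        memory: 56Mi
  replica:
    resources:
      requests:
        memory: 56Mi
postgresql:
  primary:
    resources:
      requests:
        memory: 384Mi
```

The Mastodon-side values (`sidekiq`, `streaming`, `web`) would be set the same way under whatever resource keys this chart exposes for those deployments.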