From 2a500157d2f3e49206412214a38510ee34cde512 Mon Sep 17 00:00:00 2001
From: JakeSCahill
Date: Wed, 15 Jan 2025 16:21:26 +0000
Subject: [PATCH 1/3] Update recommendations for Pod resource management

The new advice is to rely on K8s resource requests and limits

---
 .../pages/kubernetes/k-manage-resources.adoc | 94 ++++++++++++-------
 1 file changed, 59 insertions(+), 35 deletions(-)

diff --git a/modules/manage/pages/kubernetes/k-manage-resources.adoc b/modules/manage/pages/kubernetes/k-manage-resources.adoc
index cf1505a17..f14e09ca6 100644
--- a/modules/manage/pages/kubernetes/k-manage-resources.adoc
+++ b/modules/manage/pages/kubernetes/k-manage-resources.adoc
@@ -27,12 +27,32 @@ kubectl describe nodes
 [[memory]]
 == Configure memory resources
 
-On a worker node, Kubernetes and Redpanda processes are running at the same time, including the Seastar subsystem that is built into the Redpanda binary. Each of these processes consumes memory. You can configure the memory resources that are allocated to these processes.
+On a worker node, Kubernetes and Redpanda processes are running at the same time, including the Seastar subsystem that is built into the Redpanda binary. Each of these processes consumes memory. Proper configuration of memory resources ensures optimal performance and stability for your Redpanda deployment.
 
-By default, the Helm chart allocates 80% of the configured memory in `resources.memory.container` to Redpanda, with the remaining reserved for overhead such as the Seastar subsystem and other container processes.
-Redpanda Data recommends this default setting.
+=== Memory allocation and Seastar flags
 
-NOTE: Although you can also allocate the exact amount of memory for Redpanda and the Seastar subsystem manually, Redpanda Data does not recommend this approach because setting the wrong values can lead to performance issues, instability, or data loss. As a result, this approach is not documented here.
+Redpanda uses the following Seastar flags to control memory allocation:
+
+[cols="1m,2a"]
+|===
+|Seastar Flag|Description
+
+|--memory
+|Specifies the memory available to the Redpanda process. This value directly impacts Redpanda's ability to manage workloads efficiently.
+
+|--reserve-memory
+|Reserves a part of memory for system overheads such as non-heap memory, page tables, and other non-Redpanda operations. This flag is designed for Seastar running on a dedicated VM rather than inside a container.
+|===
+
+*Default (legacy) behavior*: By default, the Helm chart allocates 80% of the memory in `resources.memory.container` to `--memory` and reserves 20% for `--reserve-memory`. This is legacy behavior to maintain backward compatibility. Do not use this default in production.
+
+*Production recommendation*: Use `resources.requests.memory` for production deployments. This configuration:
+
+- Sets `--memory` to 90% of the requested memory.
+- Fixes `--reserve-memory` at 0, as Kubernetes already manages container overhead using https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits[resource requests and limits^]. This simplifies memory allocation and ensures predictable resource usage.
+- Configures Kubernetes resource requests for memory, enabling Kubernetes to effectively schedule and enforce memory allocation for containers.
+
+CAUTION: Avoid manually setting Seastar flags unless absolutely necessary. Incorrect values can lead to performance issues, instability, or data loss. To manually set these flags, use xref:reference:k-redpanda-helm-spec.adoc#statefulset-additionalredpandacmdflags[`statefulset.additionalRedpandaCmdFlags`].
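The two allocation schemes described above are plain arithmetic. The following is an editor's sketch in Python (not code from the Helm chart) that mirrors the 80%/20% legacy split and the 90%/0 request-based split, using integer math so the results are exact:

```python
GI = 1024 ** 3  # bytes in one GiB


def legacy_allocation(container_memory: int) -> dict:
    # Legacy default: 80% of resources.memory.container goes to --memory,
    # the remaining 20% is handed to --reserve-memory.
    return {
        "memory": container_memory * 80 // 100,
        "reserve_memory": container_memory * 20 // 100,
    }


def requests_allocation(requested_memory: int) -> dict:
    # Production recommendation: 90% of resources.requests.memory goes to
    # --memory; --reserve-memory is fixed at 0 because Kubernetes manages
    # container overhead through requests and limits.
    return {
        "memory": requested_memory * 90 // 100,
        "reserve_memory": 0,
    }


print(legacy_allocation(10 * GI))    # 8 Gi to Redpanda, 2 Gi reserved
print(requests_allocation(10 * GI))  # 9 Gi to Redpanda, nothing reserved
```

The same arithmetic explains the sizing guidance later in this patch series: to leave Redpanda a full 2 Gi per core after the 90% split, the request must be at least 2 Gi / 0.9 per core.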
 [tabs]
 ======
@@ -52,10 +72,9 @@ spec:
     resources:
       memory:
         enable_memory_locking: true <1>
-      container:
-        # If omitted, the `min` value is equal to the `max` value (requested resources defaults to limits)
-        # min:
-        max: <2>
+      requests:
+        # Allocates 90% to the --memory Seastar flag
+        memory: <2>
 ----
 
 ```bash
@@ -76,10 +95,9 @@ Helm::
 resources:
   memory:
     enable_memory_locking: true <1>
-  container:
-    # If omitted, the `min` value is equal to the `max` value (requested resources defaults to limits)
-    # min:
-    max: <2>
+  requests:
+    # Allocates 90% to the --memory Seastar flag
+    memory: <2>
 ----
 +
 ```bash
@@ -92,15 +110,16 @@ helm upgrade --install redpanda redpanda/redpanda --namespace --crea
 ```bash
 helm upgrade --install redpanda redpanda/redpanda --namespace --create-namespace \
   --set resources.memory.enable_memory_locking=true \ <1>
-  --set resources.memory.container.max= <2>
+  --set resources.requests.memory= <2>
 ```
 
 ====
 --
 ======
 
-<1> For production, enable memory locking to prevent the operating system from paging out Redpanda's memory to disk, which can significantly impact performance.
-<2> The amount of memory to give Redpanda, Seastar, and the other container processes. You should give Redpanda at least 2 Gi of memory per core. Given that the Helm chart allocates 80% of the container's memory to Redpanda, leaving the rest for the Seastar subsystem and other processes, set this value to at least 2.5 Gi per core to ensure Redpanda has a full 2 Gi. Redpanda supports the following memory resource units: B, K, M, G, Ki, Mi, and Gi. Memory units are converted to the nearest whole MiB. For a description of memory resource units, see the https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory[Kubernetes documentation^].
+<1> Enabling memory locking prevents the operating system from paging out Redpanda's memory to disk. This can significantly improve performance by ensuring Redpanda has uninterrupted access to its allocated memory.
+
+<2> Allocate at least 2.5 Gi of memory per core to ensure Redpanda has the 2 Gi per core it requires after accounting for the 90% allocation to the `--memory` flag. Redpanda supports the following memory resource units: B, K, M, G, Ki, Mi, and Gi. Memory units are converted to the nearest whole MiB. For a description of memory resource units, see the https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory[Kubernetes documentation^].
 
 [[qos]]
 == Quality of service and resource guarantees
@@ -129,12 +148,12 @@ spec:
   chartRef: {}
   clusterSpec:
     resources:
-      cpu:
-        cores:
-      memory:
-        container:
-          min:
-          max:
+      requests:
+        cpu:
+        memory:
+      limits:
+        cpu: # Matches the request
+        memory: # Matches the request
     statefulset:
       sideCars:
         configWatcher:
@@ -188,12 +207,12 @@ Helm::
 [,yaml]
 ----
 resources:
-  cpu:
-    cores:
-  memory:
-    container:
-      min:
-      max:
+  requests:
+    cpu:
+    memory:
+  limits:
+    cpu: # Matches the request
+    memory: # Matches the request
 statefulset:
   sideCars:
     configWatcher:
@@ -240,9 +259,10 @@ helm upgrade --install redpanda redpanda/redpanda --namespace --crea
 +
 ```bash
 helm upgrade --install redpanda redpanda/redpanda --namespace --create-namespace \
-  --set resources.cpu.cores= \
-  --set resources.memory.container.min= \
-  --set resources.memory.container.max= \
+  --set resources.requests.cpu= \
+  --set resources.limits.cpu= \
+  --set resources.requests.memory= \
+  --set resources.limits.memory= \
   --set statefulset.sideCars.configWatcher.resources.requests.cpu= \
   --set statefulset.sideCars.configWatcher.resources.requests.memory= \
   --set statefulset.sideCars.configWatcher.resources.limits.cpu= \
@@ -285,6 +305,8 @@ If Redpanda runs in a shared environment, where multiple applications run on the
 
 You can enable overprovisioning by either setting the CPU request to a fractional value or setting `overprovisioned` to `true`.
 
+NOTE: Setting `resources.requests.cpu` to a fractional value, such as 200m, enables Kubernetes to schedule Pods alongside other workloads efficiently, ensuring fair resource distribution. However, this may impact Redpanda's performance under heavy loads.
+
 [tabs]
 ======
 Helm + Operator::
@@ -301,8 +323,9 @@ spec:
   chartRef: {}
   clusterSpec:
     resources:
+      requests:
+        cpu:
       cpu:
-        cores:
         overprovisioned: true
 ----
 
 ```bash
@@ -322,8 +345,9 @@ Helm::
 [,yaml]
 ----
 resources:
+  requests:
+    cpu:
   cpu:
-    cores:
     overprovisioned: true
 ----
 +
@@ -336,7 +360,7 @@ helm upgrade --install redpanda redpanda/redpanda --namespace --crea
 ```bash
 helm upgrade --install redpanda redpanda/redpanda --namespace --create-namespace \
-  --set resources.cpu.cores= \
+  --set resources.requests.cpu= \
   --set resources.cpu.overprovisioned=true
 ```
 
@@ -350,8 +374,8 @@ If you're experimenting with Redpanda in Kubernetes, you can also set the number
 [,yaml]
 ----
 resources:
-  cpu:
-    cores: 200m
+  requests:
+    cpu: 200m
 ----
 
 include::shared:partial$suggested-reading.adoc[]

From b95b7921a3bc5dbd5c8b3b93202849deb6323c20 Mon Sep 17 00:00:00 2001
From: JakeSCahill
Date: Tue, 21 Jan 2025 15:30:32 +0000
Subject: [PATCH 2/3] Apply suggestions

---
 .../pages/kubernetes/k-manage-resources.adoc | 20 +++++++++----------
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/modules/manage/pages/kubernetes/k-manage-resources.adoc b/modules/manage/pages/kubernetes/k-manage-resources.adoc
index f14e09ca6..d83bed502 100644
--- a/modules/manage/pages/kubernetes/k-manage-resources.adoc
+++ b/modules/manage/pages/kubernetes/k-manage-resources.adoc
@@ -27,7 +27,7 @@ kubectl describe nodes
 [[memory]]
 == Configure memory resources
 
-On a worker node, Kubernetes and Redpanda processes are running at the same time, including the Seastar subsystem that is built into the Redpanda binary. Each of these processes consumes memory. Proper configuration of memory resources ensures optimal performance and stability for your Redpanda deployment.
+On a worker node, Kubernetes and Redpanda processes are running at the same time. Redpanda's memory usage is influenced by its architecture, which leverages the Seastar framework for efficient performance.
 
 === Memory allocation and Seastar flags
 
@@ -52,7 +52,7 @@ Redpanda uses the following Seastar flags to control memory allocation:
 - Fixes `--reserve-memory` at 0, as Kubernetes already manages container overhead using https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits[resource requests and limits^]. This simplifies memory allocation and ensures predictable resource usage.
 - Configures Kubernetes resource requests for memory, enabling Kubernetes to effectively schedule and enforce memory allocation for containers.
 
-CAUTION: Avoid manually setting Seastar flags unless absolutely necessary. Incorrect values can lead to performance issues, instability, or data loss. To manually set these flags, use xref:reference:k-redpanda-helm-spec.adoc#statefulset-additionalredpandacmdflags[`statefulset.additionalRedpandaCmdFlags`].
+CAUTION: Avoid manually setting Seastar flags unless absolutely necessary. Incorrect values can lead to performance issues, instability, or data loss. If you need to set these flags, use xref:reference:k-redpanda-helm-spec.adoc#statefulset-additionalredpandacmdflags[`statefulset.additionalRedpandaCmdFlags`].
 
 [tabs]
 ======
@@ -303,9 +303,9 @@ If you use PersistentVolumes, you can set the storage capacity for each volume.
 
 If Redpanda runs in a shared environment, where multiple applications run on the same worker node, you can make Redpanda less aggressive in CPU usage by enabling overprovisioning. This adjustment ensures a fairer distribution of CPU time among all processes, improving overall system efficiency at the cost of Redpanda's performance.
 
-You can enable overprovisioning by either setting the CPU request to a fractional value or setting `overprovisioned` to `true`.
+You can enable overprovisioning by either setting the CPU request to a fractional value or setting `resources.cpu.overprovisioned` to `true`.
 
-NOTE: Setting `resources.requests.cpu` to a fractional value, such as 200m, enables Kubernetes to schedule Pods alongside other workloads efficiently, ensuring fair resource distribution. However, this may impact Redpanda's performance under heavy loads.
+NOTE: When `resources.requests` or `resources.limits` are set, the `resources.cpu` parameter (including cores) is ignored. Ensure that you have not configured CPU requests and limits explicitly to avoid unexpected behavior in shared environments.
 
 [tabs]
 ======
@@ -323,9 +323,8 @@ spec:
   chartRef: {}
   clusterSpec:
     resources:
-      requests:
-        cpu:
       cpu:
+        cores:
         overprovisioned: true
 ----
 
 ```bash
@@ -345,9 +344,8 @@ Helm::
 [,yaml]
 ----
 resources:
-  requests:
-    cpu:
   cpu:
+    cores:
     overprovisioned: true
 ----
 +
@@ -360,7 +358,7 @@ helm upgrade --install redpanda redpanda/redpanda --namespace --crea
 +
 ```bash
 helm upgrade --install redpanda redpanda/redpanda --namespace --create-namespace \
-  --set resources.requests.cpu= \
+  --set resources.cpu.cores= \
   --set resources.cpu.overprovisioned=true
 ```
 
@@ -374,8 +372,8 @@ If you're experimenting with Redpanda in Kubernetes, you can also set the number
 [,yaml]
 ----
 resources:
-  requests:
-    cpu: 200m
+  cpu:
+    cores: 200m
 ----
 
 include::shared:partial$suggested-reading.adoc[]

From 867c9aae023515726a3b425cd0cfe6acb253b2ee Mon Sep 17 00:00:00 2001
From: JakeSCahill
Date: Fri, 24 Jan 2025 17:09:45 +0000
Subject: [PATCH 3/3] Apply suggestions from review

---
 .../pages/kubernetes/k-manage-resources.adoc | 34 ++++++++++++-------
 1 file changed, 22 insertions(+), 12 deletions(-)

diff --git a/modules/manage/pages/kubernetes/k-manage-resources.adoc b/modules/manage/pages/kubernetes/k-manage-resources.adoc
index d83bed502..6408613b9 100644
--- a/modules/manage/pages/kubernetes/k-manage-resources.adoc
+++ b/modules/manage/pages/kubernetes/k-manage-resources.adoc
@@ -52,7 +52,7 @@ Redpanda uses the following Seastar flags to control memory allocation:
 - Fixes `--reserve-memory` at 0, as Kubernetes already manages container overhead using https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits[resource requests and limits^]. This simplifies memory allocation and ensures predictable resource usage.
 - Configures Kubernetes resource requests for memory, enabling Kubernetes to effectively schedule and enforce memory allocation for containers.
 
-CAUTION: Avoid manually setting Seastar flags unless absolutely necessary. Incorrect values can lead to performance issues, instability, or data loss. If you need to set these flags, use xref:reference:k-redpanda-helm-spec.adoc#statefulset-additionalredpandacmdflags[`statefulset.additionalRedpandaCmdFlags`].
+CAUTION: Avoid manually setting the `--memory` and `--reserve-memory` flags unless absolutely necessary. Incorrect values can lead to performance issues, instability, or data loss.
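For reference, the override mechanism this CAUTION refers to takes the following shape in a Helm values file. This is an editor's sketch consistent with the examples added later in this patch; the `10Gi` request is an arbitrary illustrative value:

```yaml
# Flags listed here are passed straight through to the redpanda binary.
statefulset:
  additionalRedpandaCmdFlags:
    - '--lock-memory'
resources:
  requests:
    memory: 10Gi  # 90% of this request becomes the --memory value
```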
 
 [tabs]
 ======
@@ -69,9 +69,10 @@ metadata:
 spec:
   chartRef: {}
   clusterSpec:
+    statefulset:
+      additionalRedpandaCmdFlags:
+        - '--lock-memory' <1>
     resources:
-      memory:
-        enable_memory_locking: true <1>
       requests:
         # Allocates 90% to the --memory Seastar flag
         memory: <2>
@@ -92,9 +93,10 @@ Helm::
 .`memory.yaml`
 [,yaml]
 ----
+statefulset:
+  additionalRedpandaCmdFlags:
+    - '--lock-memory' <1>
 resources:
-  memory:
-    enable_memory_locking: true <1>
   requests:
     # Allocates 90% to the --memory Seastar flag
     memory: <2>
@@ -109,7 +111,7 @@ helm upgrade --install redpanda redpanda/redpanda --namespace --crea
 +
 ```bash
 helm upgrade --install redpanda redpanda/redpanda --namespace --create-namespace \
-  --set resources.memory.enable_memory_locking=true \ <1>
+  --set statefulset.additionalRedpandaCmdFlags="{--lock-memory}" \ <1>
   --set resources.requests.memory= <2>
 ```
 
@@ -119,7 +121,11 @@ helm upgrade --install redpanda redpanda/redpanda --namespace --crea
 <1> Enabling memory locking prevents the operating system from paging out Redpanda's memory to disk. This can significantly improve performance by ensuring Redpanda has uninterrupted access to its allocated memory.
 
-<2> Allocate at least 2.5 Gi of memory per core to ensure Redpanda has the 2 Gi per core it requires after accounting for the 90% allocation to the `--memory` flag. Redpanda supports the following memory resource units: B, K, M, G, Ki, Mi, and Gi. Memory units are converted to the nearest whole MiB. For a description of memory resource units, see the https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory[Kubernetes documentation^].
+<2> Allocate at least 2.22 Gi of memory per core to ensure Redpanda has the 2 Gi per core it requires after accounting for the 90% allocation to the `--memory` flag.
++
+Redpanda supports the following memory resource units: B, K, M, G, Ki, Mi, and Gi.
++
+Memory units are truncated to the nearest whole MiB. For example, a memory request of 1024 KiB will result in 1 MiB being allocated. For a description of memory resource units, see the https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory[Kubernetes documentation^].
 
 [[qos]]
 == Quality of service and resource guarantees
@@ -303,9 +309,9 @@ If you use PersistentVolumes, you can set the storage capacity for each volume.
 
 If Redpanda runs in a shared environment, where multiple applications run on the same worker node, you can make Redpanda less aggressive in CPU usage by enabling overprovisioning. This adjustment ensures a fairer distribution of CPU time among all processes, improving overall system efficiency at the cost of Redpanda's performance.
 
-You can enable overprovisioning by either setting the CPU request to a fractional value or setting `resources.cpu.overprovisioned` to `true`.
+You can enable overprovisioning by either setting the CPU request to a fractional value less than 1 or enabling the `--overprovisioned` flag.
 
-NOTE: When `resources.requests` or `resources.limits` are set, the `resources.cpu` parameter (including cores) is ignored. Ensure that you have not configured CPU requests and limits explicitly to avoid unexpected behavior in shared environments.
+NOTE: You cannot enable overprovisioning when both `resources.requests` and `resources.limits` are set. When both of these configurations are set, the `resources.cpu` parameter (including cores) is ignored.
 
 [tabs]
 ======
@@ -322,10 +328,12 @@ metadata:
 spec:
   chartRef: {}
   clusterSpec:
+    statefulset:
+      additionalRedpandaCmdFlags:
+        - '--overprovisioned'
     resources:
       cpu:
         cores:
-        overprovisioned: true
 ----
 
 ```bash
@@ -343,10 +351,12 @@ Helm::
 .`cpu-cores-overprovisioned.yaml`
 [,yaml]
 ----
+statefulset:
+  additionalRedpandaCmdFlags:
+    - '--overprovisioned'
 resources:
   cpu:
     cores:
-    overprovisioned: true
 ----
 +
 ```bash
@@ -359,7 +369,7 @@ helm upgrade --install redpanda redpanda/redpanda --namespace --crea
 ```bash
 helm upgrade --install redpanda redpanda/redpanda --namespace --create-namespace \
   --set resources.cpu.cores= \
-  --set resources.cpu.overprovisioned=true
+  --set statefulset.additionalRedpandaCmdFlags="{--overprovisioned}"
 ```
 ====
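Taken together, the final overprovisioning behavior these patches document — an explicit `--overprovisioned` flag, or a fractional CPU request below one full core — can be sketched as follows. This is an editor's illustration in Python, not chart code; `parse_cpu` and `is_overprovisioned` are hypothetical helper names:

```python
def parse_cpu(quantity: str) -> float:
    # Kubernetes CPU quantities: "200m" means 200 millicores (0.2 cores);
    # a bare number such as "2" means whole cores.
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000.0
    return float(quantity)


def is_overprovisioned(cpu_request: str, flag_set: bool = False) -> bool:
    # Overprovisioning applies when --overprovisioned is passed through
    # statefulset.additionalRedpandaCmdFlags, or when the CPU request is a
    # fractional value below one full core (the Pod shares cores with
    # other workloads instead of pinning them).
    return flag_set or parse_cpu(cpu_request) < 1.0


print(is_overprovisioned("200m"))              # fractional request
print(is_overprovisioned("2"))                 # whole dedicated cores
print(is_overprovisioned("2", flag_set=True))  # explicit flag wins
```

The fractional-request path is why the "experimenting" example in the patches sets `cores: 200m`: any request below one core implies sharing.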