diff --git a/local-antora-playbook.yml b/local-antora-playbook.yml
index e8ae271e5..c00b0512e 100644
--- a/local-antora-playbook.yml
+++ b/local-antora-playbook.yml
@@ -49,6 +49,14 @@ antora:
       filter: docker-compose
       env_type: Docker
       attribute_name: docker-labs-index
+  - require: '@sntke/antora-mermaid-extension'
+    mermaid_library_url: https://cdn.jsdelivr.net/npm/mermaid@10/dist/mermaid.esm.min.mjs
+    script_stem: mermaid-scripts
+    mermaid_initialize_options:
+      start_on_load: true
+      theme: base
+      theme_variables:
+        line_color: '#e2401b'
   - require: '@redpanda-data/docs-extensions-and-macros/extensions/collect-bloblang-samples'
   - require: '@redpanda-data/docs-extensions-and-macros/extensions/generate-rp-connect-categories'
   - require: '@redpanda-data/docs-extensions-and-macros/extensions/modify-redirects'
diff --git a/modules/ROOT/nav.adoc b/modules/ROOT/nav.adoc
index 12c573b8a..affa9a6ee 100644
--- a/modules/ROOT/nav.adoc
+++ b/modules/ROOT/nav.adoc
@@ -133,6 +133,7 @@
 *** xref:manage:kubernetes/k-remote-read-replicas.adoc[Remote Read Replicas]
 *** xref:manage:kubernetes/k-manage-resources.adoc[Manage Pod Resources]
 *** xref:manage:kubernetes/k-scale-redpanda.adoc[Scale]
+*** xref:manage:kubernetes/k-nodewatcher.adoc[]
 *** xref:manage:kubernetes/k-decommission-brokers.adoc[Decommission Brokers]
 *** xref:manage:kubernetes/k-recovery-mode.adoc[Recovery Mode]
 *** xref:manage:kubernetes/monitoring/index.adoc[Monitor]
diff --git a/modules/manage/pages/kubernetes/k-decommission-brokers.adoc b/modules/manage/pages/kubernetes/k-decommission-brokers.adoc
index bf2d62d04..2570b29d6 100644
--- a/modules/manage/pages/kubernetes/k-decommission-brokers.adoc
+++ b/modules/manage/pages/kubernetes/k-decommission-brokers.adoc
@@ -1,12 +1,14 @@
 = Decommission Brokers in Kubernetes
-:description: Remove a broker so that it is no longer considered part of the cluster.
+:description: Remove a Redpanda broker from the cluster without risking data loss or causing instability.
 :page-context-links: [{"name": "Linux", "to": "manage:cluster-maintenance/decommission-brokers.adoc" },{"name": "Kubernetes", "to": "manage:kubernetes/k-decommission-brokers.adoc" } ]
 :tags: ["Kubernetes"]
 :page-aliases: manage:kubernetes/decommission-brokers.adoc
 :page-categories: Management
 :env-kubernetes: true
 
-When you decommission a broker, its partition replicas are reallocated across the remaining brokers and it is removed from the cluster. You may want to decommission a broker in the following circumstances:
+Decommissioning a broker is the *safe and controlled* way to remove a Redpanda broker from the cluster without risking data loss or causing instability. By decommissioning, you ensure that partition replicas are reallocated across the remaining brokers so that you can then safely shut down the broker.
+
+You may want to decommission a broker in the following situations:
 
 * You are removing a broker to decrease the size of the cluster, also known as scaling down.
 * The broker has lost its storage and you need a new broker with a new node ID (broker ID).
@@ -222,15 +224,204 @@ So the primary limitation consideration is the replication factor of five, meani
 
 To decommission a broker, you can use one of the following methods:
 
-- <<Automated>>: Use the Decommission controller to automatically decommission brokers whenever you reduce the number of StatefulSet replicas.
 - <<Manual>>: Use `rpk` to decommission one broker at a time.
+- <<Automated>>: Use the Decommission controller to automatically decommission brokers whenever you reduce the number of StatefulSet replicas.
+
+[[Manual]]
+=== Manually decommission a broker
+
+Follow this workflow to manually decommission a broker before reducing the number of StatefulSet replicas:
+
+[mermaid]
+....
+flowchart TB
+    %% Define classes
+    classDef userAction stroke:#374D7C, fill:#E2EBFF, font-weight:bold,rx:5,ry:5
+
+    A[Start Manual Scale-In]:::userAction --> B["Identify Broker to Remove<br/>(Highest Pod Ordinal)"]:::userAction
+    B --> C[Decommission Broker Running on Pod with Highest Ordinal]:::userAction
+    C --> D[Monitor Decommission Status]:::userAction
+    D --> E{Is Broker Removed?}:::userAction
+    E -- No --> D
+    E -- Yes --> F[Decrease StatefulSet Replicas by 1]:::userAction
+    F --> G[Wait for Rolling Update and Cluster Health]:::userAction
+    G --> H{More Brokers to Remove?}:::userAction
+    H -- Yes --> B
+    H -- No --> I[Done]:::userAction
+....
+
+. List your brokers and their associated broker IDs:
++
+```bash
+kubectl --namespace <namespace> exec -ti redpanda-0 -c redpanda -- \
+  rpk cluster info
+```
++
+.Example output
+[%collapsible]
+====
+```
+CLUSTER
+=======
+redpanda.560e2403-3fd6-448c-b720-7b456d0aa78c
+
+BROKERS
+=======
+ID    HOST                          PORT   RACK
+0     redpanda-0.testcluster.local  32180  A
+1     redpanda-1.testcluster.local  32180  A
+4     redpanda-3.testcluster.local  32180  B
+5*    redpanda-2.testcluster.local  32180  B
+6     redpanda-4.testcluster.local  32180  C
+8     redpanda-6.testcluster.local  32180  C
+9     redpanda-5.testcluster.local  32180  D
+```
+====
++
+The output shows that the broker IDs don't match the StatefulSet ordinals, which appear in the hostnames. In this example, two brokers will be decommissioned: `redpanda-6` (ID 8) and `redpanda-5` (ID 9).
++
+NOTE: When scaling in a cluster, you cannot choose which broker is removed. Redpanda is deployed as a StatefulSet in Kubernetes. The StatefulSet controls which Pods are destroyed and always starts with the Pod that has the highest ordinal. So the first broker to be removed when updating the StatefulSet in this example is `redpanda-6` (ID 8).
+
+. Decommission the broker with the highest Pod ordinal:
++
+```bash
+kubectl --namespace <namespace> exec -ti <pod-name> -c <container-name> -- \
+  rpk redpanda admin brokers decommission <broker-id>
+```
++
+This message is displayed as soon as the request is accepted, before the decommission process is complete:
++
+```
+Success, broker has been decommissioned!
+```
++
+TIP: If the broker is not running, use the `--force` flag.
+
+. Monitor the decommissioning status:
++
+```bash
+kubectl --namespace <namespace> exec -ti <pod-name> -c <container-name> -- \
+  rpk redpanda admin brokers decommission-status <broker-id>
+```
++
+The output uses cached cluster health data that is refreshed every 10 seconds. When the completion column for all rows is 100%, the broker is decommissioned.
++
+Another way to verify that the decommission is complete is to run the following command:
++
+```bash
+kubectl --namespace <namespace> exec -ti <pod-name> -c <container-name> -- \
+  rpk cluster health
+```
++
+Verify that the decommissioned broker's ID does not appear in the list of IDs. In this example, ID 8 is missing, which means the decommission is complete.
++
+```
+CLUSTER HEALTH OVERVIEW
+=======================
+Healthy:                     true
+Controller ID:               0
+All nodes:                   [4 1 0 5 6 9]
+Nodes down:                  []
+Leaderless partitions:       []
+Under-replicated partitions: []
+```
+
+. Decrease the number of replicas *by one* to remove the Pod with the highest ordinal (the one you just decommissioned).
++
+:caution-caption: Reduce replicas by one
+[CAUTION]
+====
+When scaling in (removing brokers), remove only one broker at a time. If you reduce the StatefulSet replicas by more than one, Kubernetes can terminate multiple Pods simultaneously, causing quorum loss and cluster unavailability.
+====
+:caution-caption: Caution
++
+[tabs]
+======
+Helm + Operator::
++
+--
+.`redpanda-cluster.yaml`
+[,yaml]
+----
+apiVersion: cluster.redpanda.com/v1alpha2
+kind: Redpanda
+metadata:
+  name: redpanda
+spec:
+  chartRef: {}
+  clusterSpec:
+    statefulset:
+      replicas: <number-of-replicas>
+----
+
+Apply the Redpanda resource:
+
+```bash
+kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
+```
+
+--
+Helm::
++
+--
+
+[tabs]
+====
+--values::
++
+.`decommission.yaml`
+[,yaml]
+----
+statefulset:
+  replicas: <number-of-replicas>
+----
+
+--set::
++
+[,bash]
+----
+helm upgrade redpanda redpanda/redpanda --namespace <namespace> --wait --reuse-values --set statefulset.replicas=<number-of-replicas>
+----
+====
+--
+======
++
+This process triggers a rolling restart of each Pod so that each broker has an up-to-date `seed_servers` configuration that reflects the new list of brokers.
 
-This example shows how to scale a cluster from seven brokers to five brokers.
+You can repeat this procedure to continue scaling down.
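+
+Before repeating, you can confirm that the cluster has returned to a healthy state after the rolling restart. This is a suggested check that reuses the health command shown earlier in this topic:
+
+```bash
+kubectl exec redpanda-0 --namespace <namespace> -- rpk cluster health
+```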
 
 [[Automated]]
 === Use the Decommission controller
 
-The Decommission controller is responsible for monitoring the StatefulSet for changes in the number replicas. When the number of replicas is reduced, the controller decommissions brokers, starting from the highest Pod ordinal, until the number of brokers matches the number of replicas. For example, you have a Redpanda cluster with the following brokers:
+The Decommission controller monitors the StatefulSet for changes in the number of replicas. When the number of replicas is reduced, the controller decommissions brokers, starting from the highest Pod ordinal, until the number of brokers matches the number of replicas.
+
+[mermaid]
+....
+flowchart TB
+    %% Define classes
+    classDef userAction stroke:#374D7C, fill:#E2EBFF, font-weight:bold,rx:5,ry:5
+    classDef systemEvent fill:#F6FBF6,stroke:#25855a,stroke-width:2px,color:#20293c,rx:5,ry:5
+
+    %% Main workflow
+    A[Start Automated Scale-In]:::userAction --> B[Decrease StatefulSet<br/>Replicas by 1]:::userAction
+    B --> C[Decommission Controller<br/>Detects Reduced Replicas]:::systemEvent
+    C --> D[Controller Marks<br/>Highest Ordinal Pod for Removal]:::systemEvent
+    D --> E[Controller Orchestrates<br/>Broker Decommission]:::systemEvent
+    E --> F[Partitions Reallocate<br/>Under Controller Supervision]:::systemEvent
+    F --> G[Check Cluster Health]:::systemEvent
+    G --> H{Broker Fully Removed?}:::systemEvent
+    H -- No --> F
+    H -- Yes --> I[Done,<br/>or Repeat if Further Scale-In Needed]:::userAction
+
+    %% Legend
+    subgraph Legend
+        direction TB
+        UA([User Action]):::userAction
+        SE([System Event]):::systemEvent
+    end
+....
+
+For example, you have a Redpanda cluster with the following brokers:
 
 [.no-copy]
 ----
@@ -402,7 +593,14 @@ helm upgrade --install redpanda redpanda/redpanda \
 kubectl exec redpanda-0 --namespace <namespace> -- rpk cluster health
 ```
 
-. Decrease the number of replicas by one:
+. Decrease the number of replicas *by one*.
++
+:caution-caption: Reduce replicas by one
+[CAUTION]
+====
+When scaling in (removing brokers), remove only one broker at a time. If you reduce the StatefulSet replicas by more than one, Kubernetes can terminate multiple Pods simultaneously, causing quorum loss and cluster unavailability.
+====
+:caution-caption: Caution
 
 [tabs]
 ======
@@ -493,104 +691,7 @@ If you're running the Decommission controller as a sidecar:
 kubectl logs <pod-name> --namespace <namespace> -c redpanda-controllers
 ----
 
-You can repeat this procedure to scale down to 5 brokers.
-
-[[Manual]]
-=== Manually decommission a broker
-
-If you don't want to use the <<Automated,Decommission controller>>, follow these steps to manually decommission a broker before reducing the number of StatefulSet replicas:
-
-. List your brokers and their associated broker IDs:
-+
-```bash
-kubectl --namespace <namespace> exec -ti redpanda-0 -c redpanda -- \
-  rpk cluster info
-```
-+
-.Example output
-[%collapsible]
-====
-```
-CLUSTER
-=======
-redpanda.560e2403-3fd6-448c-b720-7b456d0aa78c
-
-BROKERS
-=======
-ID    HOST                          PORT   RACK
-0     redpanda-0.testcluster.local  32180  A
-1     redpanda-1.testcluster.local  32180  A
-4     redpanda-3.testcluster.local  32180  B
-5*    redpanda-2.testcluster.local  32180  B
-6     redpanda-4.testcluster.local  32180  C
-8     redpanda-6.testcluster.local  32180  C
-9     redpanda-5.testcluster.local  32180  D
-```
-====
-+
-The output shows that the IDs don't match the StatefulSet ordinal, which appears in the hostname. In this example, two brokers will be decommissioned: `redpanda-6` (ID 8) and `redpanda-5` (ID 9).
-+
-NOTE: When scaling in a cluster, you cannot choose which broker is decommissioned. Redpanda is deployed as a StatefulSet in Kubernetes. The StatefulSet controls which Pods are destroyed and always starts with the Pod that has the highest ordinal. So the first broker to be destroyed when updating the StatefulSet in this example is `redpanda-6` (ID 8).
-
-. Decommission the broker with your selected broker ID:
-+
-```bash
-kubectl --namespace <namespace> exec -ti <pod-name> -c <container-name> -- \
-  rpk redpanda admin brokers decommission <broker-id>
-```
-+
-This message is displayed before the decommission process is complete.
-+
-```
-Success, broker has been decommissioned!
-```
-+
-TIP: If the broker is not running, use the `--force` flag.
-
-. Monitor the decommissioning status:
-+
-```bash
-kubectl --namespace <namespace> exec -ti <pod-name> -c <container-name> -- \
-  rpk redpanda admin brokers decommission-status <broker-id>
-```
-+
-The output uses cached cluster health data that is refreshed every 10 seconds. When the completion column for all rows is 100%, the broker is decommissioned.
-+
-Another way to verify decommission is complete is by running the following command:
-+
-```bash
-kubectl --namespace <namespace> exec -ti <pod-name> -c <container-name> -- \
-  rpk cluster health
-```
-+
-Be sure to verify that the decommissioned broker's ID does not appear in the list of IDs. In this example, ID 9 is missing, which means the decommission is complete.
-+
-```
-CLUSTER HEALTH OVERVIEW
-=======================
-Healthy:                     true
-Controller ID:               0
-All nodes:                   [4 1 0 5 6 8]
-Nodes down:                  []
-Leaderless partitions:       []
-Under-replicated partitions: []
-```
-
-. Decommission any other brokers.
-+
-After decommissioning one broker and verifying that the process is complete, continue decommissioning another broker by repeating the previous two steps.
-+
-NOTE: Be sure to take into account everything in <<limitations>>, and that you have verified that your cluster and use cases will not be negatively impacted by losing brokers.
-
-. Update the StatefulSet replica value.
-+
-The last step is to update the StatefulSet replica value to reflect the new broker count. In this example the count was updated to five. If you deployed with the Helm chart, then run following command:
-+
-```bash
-helm upgrade redpanda redpanda/redpanda --namespace <namespace> --wait --reuse-values --set statefulset.replicas=5
-```
-+
-This process triggers a rolling restart of each Pod so that each broker has an up-to-date `seed_servers` configuration to reflect the new list of brokers.
+You can repeat this procedure to continue scaling down.
 
 == Troubleshooting
diff --git a/modules/manage/pages/kubernetes/k-nodewatcher.adoc b/modules/manage/pages/kubernetes/k-nodewatcher.adoc
new file mode 100644
index 000000000..d24a8e06e
--- /dev/null
+++ b/modules/manage/pages/kubernetes/k-nodewatcher.adoc
@@ -0,0 +1,211 @@
+= Install the Nodewatcher Controller
+:page-categories: Management
+:env-kubernetes: true
+:description: pass:q[The Nodewatcher controller is an emergency backstop for Redpanda clusters that use PersistentVolumes (PVs) for the Redpanda data directory. When a node running a Redpanda Pod suddenly goes offline, Nodewatcher detects the lost node, retains the associated PV, and removes the corresponding PersistentVolumeClaim (PVC). This workflow allows the Redpanda Pod to be rescheduled on a new node without losing critical data.]
+
+{description}
+
+:warning-caption: Emergency use only
+
+[WARNING]
+====
+The Nodewatcher controller is intended only for emergency scenarios (for example, node hardware or infrastructure failures). *Never use the Nodewatcher controller as a routine method for removing brokers.* If you want to remove brokers, see xref:manage:kubernetes/k-decommission-brokers.adoc[Decommission brokers] for the correct procedure.
+====
+
+:warning-caption: Warning
+
+== Why use Nodewatcher?
+
+If a worker node hosting a Redpanda Pod suddenly fails or disappears, Kubernetes might leave the associated PV and PVC in an _attached_ or _in-use_ state. Without Nodewatcher (or manual intervention), the Redpanda Pod cannot safely reschedule to another node because the volume is still recognized as occupied. In addition, the default reclaim policy might delete the volume, risking data loss. Nodewatcher automates the steps needed to retain the volume and remove the stale PVC, so Redpanda Pods can move to healthy nodes without losing the data in the original PV.
+
+== How Nodewatcher works
+
+When the controller detects events that indicate a Node resource is no longer available, it does the following:
+
+- For each Redpanda Pod on that Node, it identifies the PVC (if any) that the Pod was using for its storage.
+- It sets the reclaim policy of the affected PersistentVolume (PV) to `Retain`.
+- It deletes the associated PersistentVolumeClaim (PVC), which allows the Redpanda broker Pod to reschedule onto a new, operational node.
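+
+For reference, these two volume operations are roughly equivalent to the following manual `kubectl` commands. This is only an illustration of the behavior, not a procedure to run routinely; `<pv-name>`, `<pvc-name>`, and `<namespace>` are placeholders:
+
+[,bash]
+----
+# Keep the underlying volume when its claim is deleted
+kubectl patch persistentvolume <pv-name> \
+  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
+
+# Remove the stale claim so that the Pod can be rescheduled
+kubectl delete persistentvolumeclaim <pvc-name> --namespace <namespace>
+----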
+
+[mermaid]
+....
+flowchart TB
+    %% Define classes
+    classDef systemAction fill:#F6FBF6,stroke:#25855a,stroke-width:2px,color:#20293c,rx:5,ry:5
+
+    A[Node Fails] --> B{Is Node<br/>Running Redpanda?}:::systemAction
+    B -- Yes --> C[Identify Redpanda Pod PVC]:::systemAction
+    C --> D[Set PV reclaim policy to 'Retain']:::systemAction
+    D --> E[Delete PVC]:::systemAction
+    E --> F[Redpanda Pod<br/>is rescheduled]:::systemAction
+    B -- No --> G[Ignore event]:::systemAction
+....
+
+== Prerequisites
+
+- An existing Redpanda cluster in Kubernetes.
+- Sufficient RBAC permissions for Nodewatcher to read and modify PVs, PVCs, and Node resources.
+
+== Install Nodewatcher
+
+[tabs]
+======
+Helm + Operator::
++
+--
+
+You can install the Nodewatcher controller as part of the Redpanda Operator or as a sidecar on each Pod that runs a Redpanda broker. When you install the controller as part of the Redpanda Operator, the controller monitors all Redpanda clusters running in the same namespace as the Redpanda Operator. If you want the controller to manage only a single Redpanda cluster, install it as a sidecar on each Pod that runs a Redpanda broker, using the Redpanda resource.
+
+To install the Nodewatcher controller as part of the Redpanda Operator:
+
+. Deploy the Redpanda Operator with the Nodewatcher controller:
++
+[,bash,subs="attributes+",lines=7+8]
+----
+helm repo add redpanda https://charts.redpanda.com
+helm repo update
+helm upgrade --install redpanda-controller redpanda/operator \
+  --namespace <namespace> \
+  --set image.tag={latest-operator-version} \
+  --create-namespace \
+  --set additionalCmdFlags={--additional-controllers="nodeWatcher"} \
+  --set rbac.createAdditionalControllerCRs=true
+----
++
+- `--additional-controllers="nodeWatcher"`: Enables the Nodewatcher controller.
+- `rbac.createAdditionalControllerCRs=true`: Creates the required RBAC rules for the Redpanda Operator to monitor the Node resources and update PVCs and PVs.
+
+. Deploy a Redpanda resource:
++
+.`redpanda-cluster.yaml`
+[,yaml]
+----
+apiVersion: cluster.redpanda.com/v1alpha2
+kind: Redpanda
+metadata:
+  name: redpanda
+spec:
+  chartRef: {}
+  clusterSpec: {}
+----
++
+```bash
+kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
+```
+
+To install the Nodewatcher controller as a sidecar:
+
+.`redpanda-cluster.yaml`
+[,yaml,lines=11+13+15]
+----
+apiVersion: cluster.redpanda.com/v1alpha2
+kind: Redpanda
+metadata:
+  name: redpanda
+spec:
+  chartRef: {}
+  clusterSpec:
+    statefulset:
+      sideCars:
+        controllers:
+          enabled: true
+          run:
+            - "nodeWatcher"
+    rbac:
+      enabled: true
+----
+
+- `statefulset.sideCars.controllers.enabled`: Enables the controllers sidecar.
+- `statefulset.sideCars.controllers.run`: Enables the Nodewatcher controller.
+- `rbac.enabled`: Creates the required RBAC rules for the controller to monitor the Node resources and update PVCs and PVs.
+
+--
+Helm::
++
+--
+[tabs]
+====
+--values::
++
+.`nodewatcher-controller.yaml`
+[,yaml,lines=4+6+8]
+----
+statefulset:
+  sideCars:
+    controllers:
+      enabled: true
+      run:
+        - "nodeWatcher"
+rbac:
+  enabled: true
+----
++
+- `statefulset.sideCars.controllers.enabled`: Enables the controllers sidecar.
+- `statefulset.sideCars.controllers.run`: Enables the Nodewatcher controller.
+- `rbac.enabled`: Creates the required RBAC rules for the controller to monitor the Node resources and update PVCs and PVs.
+
+--set::
++
+[,bash,lines=4-6]
+----
+helm upgrade --install redpanda redpanda/redpanda \
+  --namespace <namespace> \
+  --create-namespace \
+  --set statefulset.sideCars.controllers.enabled=true \
+  --set statefulset.sideCars.controllers.run={"nodeWatcher"} \
+  --set rbac.enabled=true
+----
++
+- `statefulset.sideCars.controllers.enabled`: Enables the controllers sidecar.
+- `statefulset.sideCars.controllers.run`: Enables the Nodewatcher controller.
+- `rbac.enabled`: Creates the required RBAC rules for the controller to monitor the Node resources and update PVCs and PVs.
+
+====
+--
+======
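+
+To confirm that the controllers sidecar was added, you can list the containers in a broker Pod. This is an optional sanity check; `redpanda-0` is an example Pod name:
+
+[,bash]
+----
+kubectl get pod redpanda-0 --namespace <namespace> \
+  -o jsonpath='{.spec.containers[*].name}'
+----
+
+The output should include the `redpanda-controllers` container.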
+
+== Test the Nodewatcher controller
+
+. Test the Nodewatcher controller by deleting a Node resource:
++
+[,bash]
+----
+kubectl delete node <node-name>
+----
++
+NOTE: This step is for testing purposes only.
+
+. Monitor the logs of the Nodewatcher controller:
++
+--
+- If you're running the Nodewatcher controller as part of the Redpanda Operator:
++
+[,bash]
+----
+kubectl logs -l app.kubernetes.io/name=operator -c manager --namespace <namespace>
+----
+
+- If you're running the Nodewatcher controller as a sidecar:
++
+[,bash]
+----
+kubectl logs <pod-name> --namespace <namespace> -c redpanda-controllers
+----
+--
++
+You should see that the controller successfully deleted the PVC of the Pod that was running on the deleted Node resource.
++
+[,bash]
+----
+kubectl get persistentvolumeclaim --namespace <namespace>
+----
+
+. Verify that the reclaim policy of the PV is set to `Retain` so that you can recover the node, if necessary:
++
+[,bash]
+----
+kubectl get persistentvolume --namespace <namespace>
+----
+
+After the Nodewatcher controller has finished, xref:manage:kubernetes/k-decommission-brokers.adoc[decommission the broker] that was removed from the node. This is necessary to prevent a potential loss of quorum and ensure cluster stability.
+
+NOTE: Make sure to use the `--force` flag when decommissioning the broker with xref:reference:rpk/rpk-redpanda/rpk-redpanda-admin-brokers-decommission.adoc[`rpk redpanda admin brokers decommission`]. This flag is required when the broker is no longer running.
\ No newline at end of file
diff --git a/modules/manage/pages/kubernetes/k-scale-redpanda.adoc b/modules/manage/pages/kubernetes/k-scale-redpanda.adoc
index 28df81151..608dc9a9b 100644
--- a/modules/manage/pages/kubernetes/k-scale-redpanda.adoc
+++ b/modules/manage/pages/kubernetes/k-scale-redpanda.adoc
@@ -21,13 +21,26 @@ If your existing worker nodes have either too many resources or not enough reso
 - Deleting the Pod's PersistentVolumeClaim (PVC).
 - Ensuring that the PersistentVolume's (PV) reclaim policy is set to `Retain` to make sure that you can roll back to the original worker node without losing data.
 
-As an emergency backstop, the <<node-pvc,Nodewatcher controller>> can automate the deletion of PVCs and set the reclaim policy of PVs to `Retain`.
+TIP: For emergency scenarios in which a node unexpectedly fails or is decommissioned without warning, the Nodewatcher controller can help protect your Redpanda data. For details, see xref:manage:kubernetes/k-nodewatcher.adoc[].
 
 == Horizontal scaling
 
 Horizontal scaling involves modifying the number of brokers in your cluster, either by adding new ones (scaling out) or removing existing ones (scaling in). In situations where the workload is variable, horizontal scaling allows for flexibility. You can scale out when demand is high and scale in when demand is low, optimizing resource usage and cost.
 
-CAUTION: Redpanda does not support Kubernetes autoscalers. Autoscalers rely on CPU and memory metrics for scaling decisions, which do not fully capture the complexities involved in scaling Redpanda clusters. Improper scaling can lead to operational challenges. Always manually scale your Redpanda clusters as described in this topic.
+:caution-caption: Do not use autoscalers
+
+CAUTION: Redpanda does not support Kubernetes autoscalers. Autoscalers rely on CPU and memory metrics for scaling decisions, which do not fully capture the complexities involved in scaling Redpanda clusters. Always manually scale your Redpanda clusters as described in this topic.
+
+:caution-caption: Caution
+
+While you should not rely on Kubernetes autoscalers to scale your Redpanda brokers, you can prevent infrastructure-level autoscalers such as Karpenter from terminating nodes that run Redpanda Pods. For example, you can set the xref:reference:k-redpanda-helm-spec.adoc#statefulset-podtemplate-annotations[`statefulset.podTemplate.annotations`] field in the Redpanda Helm values, or the xref:reference:k-crd.adoc#k8s-api-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-podtemplate[`statefulset.podTemplate.annotations`] field in the Redpanda custom resource, to include:
+
+[,yaml]
+----
+karpenter.sh/do-not-disrupt: "true"
+----
+
+This annotation tells Karpenter not to disrupt the node on which the annotated Pod is running. This can help protect Redpanda brokers from unexpected shutdowns in environments that use Karpenter to manage infrastructure nodes.
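+
+For example, in Helm values the annotation is nested under `statefulset.podTemplate.annotations` as follows (a minimal sketch; apply it with `helm upgrade` as shown elsewhere in this topic):
+
+[,yaml]
+----
+statefulset:
+  podTemplate:
+    annotations:
+      karpenter.sh/do-not-disrupt: "true"
+----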
 
 === Scale out
@@ -119,184 +132,7 @@ kubectl exec redpanda-0 --namespace <namespace> -- rpk cluster health
 
 Scaling in is the process of removing brokers from your Redpanda cluster. You may want to remove brokers for cost reduction and resource optimization.
 
-To scale in a Redpanda cluster, you must decommission the brokers that you want to remove before updating the `statefulset.replica` setting in the Helm values. See xref:manage:kubernetes/k-decommission-brokers.adoc[].
-
-[[node-pvc]]
-== Install the Nodewatcher controller
-
-The Nodewatcher controller maintains cluster operation during node failures by managing the lifecycle of PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs) for Redpanda clusters. When the controller detects that a Node resource is not available, it sets the reclaim policy of the PV to `Retain`, helping to prevent data loss. Concurrently, it orchestrates the deletion of the PVC, which allows the Redpanda broker that was previously running on the deleted worker node to be rescheduled onto new, operational nodes.
-
-[WARNING]
-====
-The Nodewatcher controller is an emergency backstop to keep your Redpanda cluster running in case of unexpected node failures. *Never use this controller as a routine method for removing brokers.*
-
-Using the Nodewatcher controller as a routine method for removing brokers can lead to unintended consequences, such as increased risk of data loss and inconsistent cluster states. The Nodewatcher is designed for emergency scenarios and not for managing the regular scaling, decommissioning, and rebalancing of brokers.
-
-To safely scale in your Redpanda cluster, always use the xref:manage:kubernetes/k-decommission-brokers.adoc[decommission process], which ensures that brokers are removed in a controlled manner, with data properly redistributed across the remaining nodes, maintaining cluster health and data integrity.
-====
-
-. Install the Nodewatcher controller:
-+
-[tabs]
-======
-Helm + Operator::
-+
---
-
-You can install the Nodewatcher controller as part of the Redpanda Operator or as a sidecar on each Pod that runs a Redpanda broker. When you install the controller as part of the Redpanda Operator, the controller monitors all Redpanda clusters running in the same namespace as the Redpanda Operator. If you want the controller to manage only a single Redpanda cluster, install it as a sidecar on each Pod that runs a Redpanda broker, using the Redpanda resource.
-
-To install the Nodewatcher controller as part of the Redpanda Operator:
-
-.. Deploy the Redpanda Operator with the Nodewatcher controller:
-+
-[,bash,subs="attributes+",lines=7+8]
-----
-helm repo add redpanda https://charts.redpanda.com
-helm repo update
-helm upgrade --install redpanda-controller redpanda/operator \
-  --namespace <namespace> \
-  --set image.tag={latest-operator-version} \
-  --create-namespace \
-  --set additionalCmdFlags={--additional-controllers="nodeWatcher"} \
-  --set rbac.createAdditionalControllerCRs=true
-----
-+
-- `--additional-controllers="nodeWatcher"`: Enables the Nodewatcher controller.
-- `rbac.createAdditionalControllerCRs=true`: Creates the required RBAC rules for the Redpanda Operator to monitor the Node resources and update PVCs and PVs.
-
-.. Deploy a Redpanda resource:
-+
-.`redpanda-cluster.yaml`
-[,yaml]
-----
-apiVersion: cluster.redpanda.com/v1alpha2
-kind: Redpanda
-metadata:
-  name: redpanda
-spec:
-  chartRef: {}
-  clusterSpec: {}
-----
-+
-```bash
-kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
-```
-
-To install the Decommission controller as a sidecar:
-
-.`redpanda-cluster.yaml`
-[,yaml,lines=11+13+15]
-----
-apiVersion: cluster.redpanda.com/v1alpha2
-kind: Redpanda
-metadata:
-  name: redpanda
-spec:
-  chartRef: {}
-  clusterSpec:
-    statefulset:
-      sideCars:
-        controllers:
-          enabled: true
-          run:
-            - "nodeWatcher"
-    rbac:
-      enabled: true
-----
-
-- `statefulset.sideCars.controllers.enabled`: Enables the controllers sidecar.
-- `statefulset.sideCars.controllers.run`: Enables the Nodewatcher controller.
-- `rbac.enabled`: Creates the required RBAC rules for the controller to monitor the Node resources and update PVCs and PVs.
-
---
-Helm::
-+
---
-[tabs]
-====
---values::
-+
-.`decommission-controller.yaml`
-[,yaml,lines=4+6+8]
-----
-statefulset:
-  sideCars:
-    controllers:
-      enabled: true
-      run:
-        - "nodeWatcher"
-rbac:
-  enabled: true
-----
-+
-- `statefulset.sideCars.controllers.enabled`: Enables the controllers sidecar.
-- `statefulset.sideCars.controllers.run`: Enables the Nodewatcher controller.
-- `rbac.enabled`: Creates the required RBAC rules for the controller to monitor the Node resources and update PVCs and PVs.
-
---set::
-+
-[,bash,lines=4-6]
-----
-helm upgrade --install redpanda redpanda/redpanda \
-  --namespace <namespace> \
-  --create-namespace \
-  --set statefulset.sideCars.controllers.enabled=true \
-  --set statefulset.sideCars.controllers.run={"nodeWatcher"} \
-  --set rbac.enabled=true
-----
-+
-- `statefulset.sideCars.controllers.enabled`: Enables the controllers sidecar.
-- `statefulset.sideCars.controllers.run`: Enables the Nodewatcher controller.
-- `rbac.enabled`: Creates the required RBAC rules for the controller to monitor the Node resources and update PVCs and PVs.
-
-====
---
-======
-
-. Test the Nodewatcher controller by deleting a Node resource:
-+
-[,bash]
-----
-kubectl delete node <node-name>
-----
-+
-NOTE: This step is for testing purposes only.
-
-. Monitor the logs of the Nodewatcher controller:
-+
---
-- If you're running the Nodewatcher controller as part of the Redpanda Operator:
-+
-[,bash]
-----
-kubectl logs -l app.kubernetes.io/name=operator -c manager --namespace <namespace>
-----
-
-- If you're running the Nodewatcher controller as a sidecar:
-+
-[,bash]
-----
-kubectl logs <pod-name> --namespace <namespace> -c redpanda-controllers
-----
---
-+
-You should see that the controller successfully deleted the PVC of the Pod that was running on the deleted Node resource.
-+
-[,bash]
-----
-kubectl get persistentvolumeclaim --namespace <namespace>
-----
-
-. Verify that the reclaim policy of the PV is set to `Retain` to allow you to recover the node, if necessary:
-+
-[,bash]
-----
-kubectl get persistentvolume --namespace <namespace>
-----
-
-After the Nodewatcher controller has finished, xref:manage:kubernetes/k-decommission-brokers.adoc[decommission the broker] that was removed from the node. This is necessary to prevent a potential loss of quorum and ensure cluster stability.
-
-NOTE: Make sure to use the `--force` flag when decommissioning the broker with xref:reference:rpk/rpk-redpanda/rpk-redpanda-admin-brokers-decommission.adoc[`rpk redpanda admin brokers decommission`]. This flag is required when the broker is no longer running.
+To scale in a Redpanda cluster, follow the xref:manage:kubernetes/k-decommission-brokers.adoc[instructions for decommissioning brokers in Kubernetes] to safely remove brokers.
 
diff --git a/package-lock.json b/package-lock.json
index 5676f9464..559466d6a 100644
--- a/package-lock.json
+++ b/package-lock.json
@@ -12,7 +12,8 @@
         "@antora/cli": "3.1.2",
         "@antora/site-generator": "3.1.2",
         "@asciidoctor/tabs": "^1.0.0-beta.5",
-        "@redpanda-data/docs-extensions-and-macros": "^3.0.0"
+        "@redpanda-data/docs-extensions-and-macros": "^3.0.0",
+        "@sntke/antora-mermaid-extension": "^0.0.6"
       },
       "devDependencies": {
         "@octokit/core": "^6.1.2",
@@ -2890,6 +2891,12 @@
         "lilconfig": ">=2"
       }
     },
+    "node_modules/@sntke/antora-mermaid-extension": {
+      "version": "0.0.6",
+      "resolved": "https://registry.npmjs.org/@sntke/antora-mermaid-extension/-/antora-mermaid-extension-0.0.6.tgz",
+      "integrity": "sha512-c4L+JsJYQYq/R73h5yRdBBR1jVkVdhIm6yhRy1Y009IpvvYAQor3TIxwaFXnPNR2NyfSlXUpXHelkEHddmJMOw==",
+      "license": "MIT"
+    },
     "node_modules/@szmarczak/http-timer": {
       "version": "5.0.1",
       "resolved": "https://registry.npmjs.org/@szmarczak/http-timer/-/http-timer-5.0.1.tgz",
diff --git a/package.json b/package.json
index bc5c23a53..c747eebdb 100644
--- a/package.json
+++ b/package.json
@@ -15,12 +15,13 @@
     "@antora/cli": "3.1.2",
     "@antora/site-generator": "3.1.2",
     "@asciidoctor/tabs": "^1.0.0-beta.5",
-    "@redpanda-data/docs-extensions-and-macros": "^3.0.0"
+    "@redpanda-data/docs-extensions-and-macros": "^3.0.0",
+    "@sntke/antora-mermaid-extension": "^0.0.6"
   },
   "devDependencies": {
-    "@octokit/rest": "^21.0.1",
     "@octokit/core": "^6.1.2",
     "@octokit/plugin-retry": "^7.1.1",
+    "@octokit/rest": "^21.0.1",
     "@web/dev-server": "^0.2.1",
     "cross-env": "^7.0.3",
     "doc-detective": "^2.17.0",