From b68366dfd0f8958866b7543697492cd9a345913e Mon Sep 17 00:00:00 2001 From: Aleksandr Zimin Date: Fri, 31 Jan 2025 00:19:35 +0300 Subject: [PATCH] [docs] Configuration examples (#102) Signed-off-by: Aleksandr Zimin Signed-off-by: Max Chervov Signed-off-by: Denis.Rebenok Signed-off-by: Artem Kladov Co-authored-by: Max Chervov Co-authored-by: Denis.Rebenok Co-authored-by: Artem Kladov Co-authored-by: Rinat Mukaev --- ...ONFIGURATION_RU.md => CONFIGURATION.ru.md} | 0 docs/{CR_RU.md => CR.ru.md} | 0 docs/FAQ.md | 112 ++- docs/{FAQ_RU.md => FAQ.ru.md} | 66 +- docs/LAYOUTS.md | 753 ++++++++++++++++++ docs/LAYOUTS.ru.md | 746 +++++++++++++++++ docs/{README_RU.md => README.ru.md} | 0 docs/{USAGE_RU.md => USAGE.ru.md} | 0 .../sds-node-configurator-scenaries.png | Bin 0 -> 283683 bytes .../sds-node-configurator-scenaries.puml | 51 ++ .../sds-node-configurator-scenaries.ru.png | Bin 0 -> 141775 bytes .../sds-node-configurator-scenaries.ru.puml | 51 ++ 12 files changed, 1754 insertions(+), 25 deletions(-) rename docs/{CONFIGURATION_RU.md => CONFIGURATION.ru.md} (100%) rename docs/{CR_RU.md => CR.ru.md} (100%) rename docs/{FAQ_RU.md => FAQ.ru.md} (77%) create mode 100644 docs/LAYOUTS.md create mode 100644 docs/LAYOUTS.ru.md rename docs/{README_RU.md => README.ru.md} (100%) rename docs/{USAGE_RU.md => USAGE.ru.md} (100%) create mode 100644 docs/images/sds-node-configurator-scenaries.png create mode 100644 docs/images/sds-node-configurator-scenaries.puml create mode 100644 docs/images/sds-node-configurator-scenaries.ru.png create mode 100644 docs/images/sds-node-configurator-scenaries.ru.puml diff --git a/docs/CONFIGURATION_RU.md b/docs/CONFIGURATION.ru.md similarity index 100% rename from docs/CONFIGURATION_RU.md rename to docs/CONFIGURATION.ru.md diff --git a/docs/CR_RU.md b/docs/CR.ru.md similarity index 100% rename from docs/CR_RU.md rename to docs/CR.ru.md diff --git a/docs/FAQ.md b/docs/FAQ.md index e249a16d..fac83abd 100644 --- a/docs/FAQ.md +++ b/docs/FAQ.md @@ -9,52 +9,56 @@ The module is guaranteed to work only with stock kernels that are shipped with t The module may work with other kernels or distributions, but its stable operation and availability of all features is not guaranteed. {{< /alert >}} -## Why does creating `BlockDevice` and `LVMVolumeGroup` resources in a cluster fail? +## Why does creating BlockDevice and LVMVolumeGroup resources in a cluster fail? -* In most cases, the creation of `BlockDevice` resources fails because the existing devices fail filtering by the controller. Please make sure that your devices meet the [requirements](./usage.html#the-conditions-the-controller-imposes-on-the-device). +- In most cases, the creation of BlockDevice resources fails because the existing devices fail filtering by the controller. Make sure that your devices meet the [requirements](./usage.html#the-conditions-the-controller-imposes-on-the-device). -* Creating LVMVolumeGroup resources may fail due to the absence of BlockDevice resources in the cluster, as their names are used in the LVMVolumeGroup specification. +- Creating LVMVolumeGroup resources may fail due to the absence of BlockDevice resources in the cluster, as their names are used in the LVMVolumeGroup specification. -* If the `BlockDevice` resources are present and the `LVMVolumeGroup` resources are not present, please make sure the existing `LVM Volume Group` on the node has a special tag `storage.deckhouse.io/enabled=true` attached. 
+- If the BlockDevice resources are present and the LVMVolumeGroup resources are not, make sure the existing `LVM Volume Group` on the node has the special tag `storage.deckhouse.io/enabled=true` attached. -## I have deleted the `LVMVolumeGroup` resource, but the resource and its `Volume Group` are still there. What do I do? +## I have deleted the LVMVolumeGroup resource, but the resource and its `Volume Group` are still there. What do I do? Such a situation is possible in two cases: 1. The `Volume Group` contains `LV`. -The controller does not take responsibility for removing LV from the node, so if there are any logical volumes in the `Volume Group` created by the resource, you need to manually delete them on the node. After this, both the resource and the `Volume Group` (along with the `PV`) will be deleted automatically. + + The controller does not take responsibility for removing LV from the node, so if there are any logical volumes in the `Volume Group` created by the resource, you need to manually delete them on the node. After this, both the resource and the `Volume Group` (along with the `PV`) will be deleted automatically. 2. The resource has an annotation `storage.deckhouse.io/deletion-protection`. -This annotation protects the resource from deletion and, as a result, the `Volume Group` created by it. You need to remove the annotation manually with the command: -```shell -kubectl annotate lvg %lvg-name% storage.deckhouse.io/deletion-protection- -``` -After the command's execution, both the `LVMVolumeGroup` resource and `Volume Group` will be deleted automatically. + This annotation protects the resource from deletion and, as a result, the `Volume Group` created by it. You need to remove the annotation manually with the command: + + ```shell + kubectl annotate lvg %lvg-name% storage.deckhouse.io/deletion-protection- + ``` + + After the command is executed, both the LVMVolumeGroup resource and `Volume Group` will be deleted automatically. -## I'm trying to create a `Volume Group` using the `LVMVolumeGroup` resource, but I'm not getting anywhere. Why? +## I'm trying to create a `Volume Group` using the LVMVolumeGroup resource, but I'm not getting anywhere. Why? Most likely, your resource fails controller validation even if it has passed the Kubernetes validation successfully. -The exact cause of the failure can be found in the `status.message` field of the resource itself. +The exact cause of the failure can be found in the `status.message` field of the resource. You can also refer to the controller's logs. -The problem usually stems from incorrectly defined `BlockDevice` resources. Please make sure these resources meet the following requirements: +The problem usually stems from incorrectly-defined BlockDevice resources. Make sure these resources meet the following requirements: + - The `Consumable` field is set to `true`. -- For a `Volume Group` of the `Local` type, the specified `BlockDevice` belong to the same node. -- The current names of the `BlockDevice` resources are specified. +- For a `Volume Group` of the `Local` type, the specified BlockDevice resources belong to the same node. +- The current names of the BlockDevice resources are specified. -The full list of expected values can be found in the [CR reference](./cr.html) of the `LVMVolumeGroup` resource. +A full list of expected values can be found in the [CR reference](./cr.html) of the LVMVolumeGroup resource. -## What happens if I unplug one of the devices in a `Volume Group`? 
Will the linked `LVMVolumeGroup` resource be deleted? +## What happens if I unplug one of the devices in a `Volume Group`? Will the linked LVMVolumeGroup resource be deleted? -The `LVMVolumeGroup` resource will persist as long as the corresponding `Volume Group` exists. As long as at least one device exists, the `Volume Group` will be there, albeit in an unhealthy state. +The LVMVolumeGroup resource will persist as long as the corresponding `Volume Group` exists. As long as at least one device exists, the `Volume Group` will be there, albeit in an unhealthy state. Note that these issues will be reflected in the resource's `status`. -Once the unplugged device is plugged back in and reactivated, the `LVM Volume Group` will regain its functionality while the corresponding `LVMVolumeGroup` resource will also be updated to reflect the current state. +Once the unplugged device is plugged back in and reactivated, the `LVM Volume Group` will regain its functionality while the corresponding LVMVolumeGroup resource will also be updated to reflect the current state. ## How to transfer control of an existing `LVM Volume Group` on the node to the controller? -Simply add the LVM tag `storage.deckhouse.io/enabled=true` to the LVM Volume Group on the node: +Add the LVM tag `storage.deckhouse.io/enabled=true` to the LVM Volume Group on the node: ```shell vgchange myvg-0 --addtag storage.deckhouse.io/enabled=true @@ -68,14 +72,74 @@ Delete the `storage.deckhouse.io/enabled=true` LVM tag for the target `Volume Gr vgchange myvg-0 --deltag storage.deckhouse.io/enabled=true ``` -The controller will then stop tracking the selected `Volume Group` and delete the associated `LVMVolumeGroup` resource automatically. +The controller will then stop tracking the selected `Volume Group` and delete the associated LVMVolumeGroup resource automatically. ## I haven't added the `storage.deckhouse.io/enabled=true` LVM tag to the `Volume Group`, but it is there. How is this possible? -This can happen if you created the `LVM Volume Group` using the `LVMVolumeGroup` resource, in which case the controller will automatically add this LVM tag to the created `LVM Volume Group`. This is also possible if the `Volume Group` or its `Thin-pool` already had the `linstor-*` LVM tag of the `linstor` module. +This can happen if you created the `LVM Volume Group` using the LVMVolumeGroup resource, in which case the controller will automatically add this LVM tag to the created `LVM Volume Group`. This is also possible if the `Volume Group` or its `Thin-pool` already had the `linstor-*` LVM tag of the `linstor` module. When you switch from the `linstor` module to the `sds-node-configurator` and `sds-drbd` modules, the `linstor-*` LVM tags are automatically replaced with the `storage.deckhouse.io/enabled=true` LVM tag in the `Volume Group`. This way, the `sds-node-configurator` gains control over these `Volume Groups`. +## How to use the LVMVolumeGroupSet resource to create LVMVolumeGroup? + +To create an LVMVolumeGroup using the LVMVolumeGroupSet resource, you need to specify node selectors and a template for the LVMVolumeGroup resources in the LVMVolumeGroupSet specification. Currently, only the `PerNode` strategy is supported. With this strategy, the controller will create one LVMVolumeGroup resource from the template for each node that matches the selector. 
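+
+Before applying a set, it can help to preview which nodes the selector will match. For the worker-role label used in the example below, a quick check is:
+
+```shell
+kubectl get nodes -l node-role.kubernetes.io/worker
+```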
+ +Example of an LVMVolumeGroupSet specification: + +```yaml +apiVersion: storage.deckhouse.io/v1alpha1 +kind: LVMVolumeGroupSet +metadata: + name: my-lvm-volume-group-set + labels: + my-label: my-value +spec: + strategy: PerNode + nodeSelector: + matchLabels: + node-role.kubernetes.io/worker: "" + lvmVolumeGroupTemplate: + metadata: + labels: + my-label-for-lvg: my-value-for-lvg + spec: + type: Local + blockDeviceSelector: + matchLabels: + status.blockdevice.storage.deckhouse.io/model: + actualVGNameOnTheNode: +``` + +## How to use the LVMVolumeGroupSet resource to create LVMVolumeGroup? + +To create an LVMVolumeGroup using the LVMVolumeGroupSet resource, you need to specify node selectors and a template for the LVMVolumeGroup resources in the LVMVolumeGroupSet specification. Currently, only the `PerNode` strategy is supported. With this strategy, the controller will create one LVMVolumeGroup resource from the template for each node that matches the selector. + +Example of an LVMVolumeGroupSet specification: + +```yaml +apiVersion: storage.deckhouse.io/v1alpha1 +kind: LVMVolumeGroupSet +metadata: + name: my-lvm-volume-group-set + labels: + my-label: my-value +spec: + strategy: PerNode + nodeSelector: + matchLabels: + node-role.kubernetes.io/worker: "" + lvmVolumeGroupTemplate: + metadata: + labels: + my-label-for-lvg: my-value-for-lvg + spec: + type: Local + blockDeviceSelector: + matchLabels: + status.blockdevice.storage.deckhouse.io/model: + actualVGNameOnTheNode: +``` + ## Which labels are added by the controller to BlockDevice resources * status.blockdevice.storage.deckhouse.io/type - LVM type @@ -104,4 +168,4 @@ When you switch from the `linstor` module to the `sds-node-configurator` and `sd * status.blockdevice.storage.deckhouse.io/hotplug - hot-plug capability -* status.blockdevice.storage.deckhouse.io/machineid - ID of the server on which the block device is installed \ No newline at end of file +* status.blockdevice.storage.deckhouse.io/machineid - ID of the server on which the block device is installed diff --git a/docs/FAQ_RU.md b/docs/FAQ.ru.md similarity index 77% rename from docs/FAQ_RU.md rename to docs/FAQ.ru.md index 3873220e..a81dc187 100644 --- a/docs/FAQ_RU.md +++ b/docs/FAQ.ru.md @@ -77,6 +77,70 @@ vgchange myvg-0 --deltag storage.deckhouse.io/enabled=true При миграции с встроенного модуля `linstor` на модули `sds-node-configurator` и `sds-drbd` автоматически происходит изменение LVM-тегов `linstor-*` на LVM-тег `storage.deckhouse.io/enabled=true` в `Volume Group`. Таким образом, управление этими `Volume Group` передается модулю `sds-node-configurator`. +## Как использовать ресурс `LVMVolumeGroupSet` для создания `LVMVolumeGroup`? + +Для создания `LVMVolumeGroup` с помощью `LVMVolumeGroupSet` необходимо указать в спецификации `LVMVolumeGroupSet` селекторы для узлов и шаблон для создаваемых ресурсов `LVMVolumeGroup`. На данный момент поддерживается только стратегия `PerNode`, при которой контроллер создаст по одному ресурсу `LVMVolumeGroup` из шаблона для каждого узла, удовлетворяющего селектору. 
+ +Пример спецификации `LVMVolumeGroupSet`: + +```yaml +apiVersion: storage.deckhouse.io/v1alpha1 +kind: LVMVolumeGroupSet +metadata: + name: my-lvm-volume-group-set + labels: + my-label: my-value +spec: + strategy: PerNode + nodeSelector: + matchLabels: + node-role.kubernetes.io/worker: "" + lvmVolumeGroupTemplate: + metadata: + labels: + my-label-for-lvg: my-value-for-lvg + spec: + type: Local + blockDeviceSelector: + matchLabels: + status.blockdevice.storage.deckhouse.io/model: + actualVGNameOnTheNode: + + +``` + +## Как использовать ресурс `LVMVolumeGroupSet` для создания `LVMVolumeGroup`? + +Для создания `LVMVolumeGroup` с помощью `LVMVolumeGroupSet` необходимо указать в спецификации `LVMVolumeGroupSet` селекторы для узлов и шаблон для создаваемых ресурсов `LVMVolumeGroup`. На данный момент поддерживается только стратегия `PerNode`, при которой контроллер создаст по одному ресурсу `LVMVolumeGroup` из шаблона для каждого узла, удовлетворяющего селектору. + +Пример спецификации `LVMVolumeGroupSet`: + +```yaml +apiVersion: storage.deckhouse.io/v1alpha1 +kind: LVMVolumeGroupSet +metadata: + name: my-lvm-volume-group-set + labels: + my-label: my-value +spec: + strategy: PerNode + nodeSelector: + matchLabels: + node-role.kubernetes.io/worker: "" + lvmVolumeGroupTemplate: + metadata: + labels: + my-label-for-lvg: my-value-for-lvg + spec: + type: Local + blockDeviceSelector: + matchLabels: + status.blockdevice.storage.deckhouse.io/model: + actualVGNameOnTheNode: + + +``` + ## Какие лейблы добавляются контроллером на ресурсы BlockDevices * status.blockdevice.storage.deckhouse.io/type - тип LVM @@ -105,4 +169,4 @@ vgchange myvg-0 --deltag storage.deckhouse.io/enabled=true * status.blockdevice.storage.deckhouse.io/hotplug - возможность hot подключения -* status.blockdevice.storage.deckhouse.io/machineid - ID сервера, на котором установлено блочное устройство \ No newline at end of file +* status.blockdevice.storage.deckhouse.io/machineid - ID сервера, на котором установлено блочное устройство diff --git a/docs/LAYOUTS.md b/docs/LAYOUTS.md new file mode 100644 index 00000000..c42d0184 --- /dev/null +++ b/docs/LAYOUTS.md @@ -0,0 +1,753 @@ +--- +title: "Module sds-node-configurator: sds-module configuration scenarios" +linkTitle: "Configuration scenarios" +description: "Sds-module configuration scenarios using sds-node-configurator" +--- + +{{< alert level="warning" >}} +The module's functionality is guaranteed only when using stock kernels provided with [supported distributions](https://deckhouse.io/documentation/v1/supported_versions.html#linux). + +The module may work with other kernels or distributions, but this is not guaranteed. +{{< /alert >}} + +{{< alert level="info" >}} +If you create virtual machines by cloning, you must change the UUID of the volume groups (VG) on the cloned VMs. To do this, run the `vgchange -u` command. This will generate new UUID for all VG on the virtual machine. You can add this to the `cloud-init` script if needed. + +You can only change the UUID of a VG if it has no active logical volumes (LV). To deactivate a logical volume, unmount it and run the following command: + +```shell +lvchange -an +``` + +, where `` — the name of a VG, to deactivate all LV in the VG, or the name of a LV, to deactivate a specific LV. 
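+
+A minimal sketch of the full sequence on a cloned VM, assuming a single VG named `main` and a hypothetical mount point:
+
+```shell
+# Unmount and deactivate all logical volumes in the group first
+umount /mnt/data   # hypothetical mount point, adjust to your layout
+lvchange -an main
+
+# Generate new UUIDs for all VGs on the clone
+vgchange -u
+
+# Reactivate the logical volumes in the group
+vgchange -ay main
+```
+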
+{{< /alert >}} + +## Configuration methods and scenarios for node disk subsystems + +On this page, you can find two methods for configuring the disk subsystem on Kubernetes cluster nodes, +depending on storage conditions: + +- [Storage with identical disks](#storage-with-identical-disks) +- [Combined storage](#combined-storage) + +Each configuration method comes with two configuration scenarios: + +- "Full mirror". We recommend using this scenario due to its simplicity and reliability. +- "Partial mirror". + +The following table contains details, advantages, and disadvantages of each scenario: + +| Configuration scenario | Details | Advantages | Disadvantages | +| ---------------------- |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| ---------- |-----------------------------------------------------------------------------------------------------------------------------| +| "Full mirror" |
  • Disks aren't partitioned. A mirror is made of entire disks
  • A single VG is used for the root system and data
|
  • Reliable
  • Easy to configure and use
  • Convenient for allocating space between different software-defined storage (SDS) systems
|
  • Disk space overhead when used with SDS that replicate data on their own
| +| "Partial mirror" |
  • Disks are divided into two partitions
  • The first partition on each disk is used to create a mirror. It stores a VG where the operating system (OS) is installed
  • The second partition is used as a VG for data, without mirroring
|
  • Reliable
  • The most efficient use of disk space
|
  • Difficult to configure and use
  • Very difficult to reallocate space between safe and unsafe partitions
| + +The following diagram depicts the differences in disk subsystem configuration depending on the selected scenario: + +![Configuration scenarios compared](images/sds-node-configurator-scenaries.png) + +## Storage with identical disks + +In this scenario, you will be using a single-type disks on a node. + +### Full mirror + +We recommend using this configuration scenario due to its simplicity and reliability. + +To configure a node according to this scenario, do the following: + +1. Assemble a mirror of the entire disks (hardware or software). + This mirror will be used for both the root system and data. +2. When installing the OS: + - Create a VG named `main` on the mirror. + - Create an LV named `root` in the `main` VG. + - Install the OS on the `root` LV. +3. Add the `storage.deckhouse.io/enabled=true` tag to the `main` VG using the following command: + + ```shell + vgchange main --addtag storage.deckhouse.io/enabled=true + ``` + +4. Add the prepared node to the Deckhouse cluster. + + If the node matches the `nodeSelector` specified in `spec.nodeSelector` of the `sds-replicated-volume` + or `sds-local-volume` modules, the `sds-node-configurator` module agent will start on that node. + It will detect the `main` VG and add a corresponding `LVMVolumeGroup` resource to the Deckhouse cluster. + The LVMVolumeGroup resource can then be used to create volumes in the `sds-replicated-volume` or `sds-local-volume` modules. + +#### Example of SDS module configuration (identical disks, "Full mirror") + +In this example, it's assumed that you have configured three nodes following the "Full mirror" scenario. +In this case, the Deckhouse cluster will have three LVMVolumeGroup resources with randomly generated names. +In the future, it will be possible to specify a name for the LVMVolumeGroup resources +created during automatic VG discovery by adding the `LVM` tag with the desired resource name. + +To list the LVMVolumeGroup resources, run the following command: + +```shell +kubectl get lvmvolumegroups.storage.deckhouse.io +``` + +In the output, you should see the following list: + +```console +NAME THINPOOLS CONFIGURATION APPLIED PHASE NODE SIZE ALLOCATED SIZE VG AGE +vg-08d3730c-9201-428d-966c-45795cba55a6 0/0 True Ready worker-2 25596Mi 0 main 61s +vg-b59ff9e1-6ef2-4761-b5d2-6172926d4f4d 0/0 True Ready worker-0 25596Mi 0 main 4m17s +vg-c7863e12-c143-42bb-8e33-d578ce50d6c7 0/0 True Ready worker-1 25596Mi 0 main 108s +``` + +##### Configuring the `sds-local-volume` module (identical disks, "Full mirror") + +To configure the `sds-local-volume` module following the "Full mirror" scenario, +create a LocalStorageClass resource and include all your LVMVolumeGroup resources +to use the `main` VG on all your nodes in the `sds-local-volume` module: + +```yaml +kubectl apply -f -<}} +Using partitions with the same PARTUUID is not supported, as well as changing the PARTUUID of a partition used for creating a VG. When creating a partition table, we recommended that you choose the `GPT` format, as PARTUUID in `MBR` are pseudo-random and contain the partition number. Additionally, `MBR` does not support PARTLABEL, which can be helpful to identify a partition in Deckhouse later. +{{< /alert >}} + +In this scenario, two partitions on each disk are used: +one for the root system and SDS data storage that is not replicated, +and another for SDS data that is replicated. +The first partition of each disk is used to create a mirror, +and the second is used to create a separate VG without mirroring. 
+This approach maximizes the efficient use of disk space. + +To configure a node according to this scenario, do the following: + +1. When installing the OS: + - Create two partitions on each disk. + - Create a mirror from the first partitions on each disk. + - Create a VG named `main-safe` on the mirror. + - Create an LV named `root` in the `main-safe` VG. + - Install the OS on the `root` LV. +2. Add the `storage.deckhouse.io/enabled=true` tag to the `main-safe` VG using the following command: + + ```shell + vgchange main-safe --addtag storage.deckhouse.io/enabled=true + ``` + +3. Create a VG named `main-unsafe` from the second partitions of each disk. +4. Add the `storage.deckhouse.io/enabled=true` tag to the `main-unsafe` VG using the following command: + + ```shell + vgchange main-unsafe --addtag storage.deckhouse.io/enabled=true + ``` + +5. Add the prepared node to the Deckhouse cluster. + + If the node matches the `nodeSelector` specified in `spec.nodeSelector` of the `sds-replicated-volume` or `sds-local-volume` modules, the `sds-node-configurator` module agent will start on that node. It will detect the `main-safe` and `main-unsafe` VG and add a corresponding LVMVolumeGroup resources to the Deckhouse cluster. These LVMVolumeGroup resources can then be used to create volumes in the `sds-replicated-volume` or `sds-local-volume` modules. + +#### Example of SDS module configuration (identical disks, "Partial mirror") + +In this example, it's assumed that you have configured three nodes following the "Partial mirror" scenario. +In this case, the Deckhouse cluster will have six LVMVolumeGroup resources with randomly generated names. +In the future, it will be possible to specify a name for the LVMVolumeGroup resources created during automatic VG discovery +by adding the `LVM` tag with the desired resource name. + +To list the LVMVolumeGroup resources, run the following command: + +```shell +kubectl get lvmvolumegroups.storage.deckhouse.io +``` + +In the output, you should see the following list: + +```console +NAME THINPOOLS CONFIGURATION APPLIED PHASE NODE SIZE ALLOCATED SIZE VG AGE +vg-08d3730c-9201-428d-966c-45795cba55a6 0/0 True Ready worker-2 25596Mi 0 main-safe 61s +vg-b59ff9e1-6ef2-4761-b5d2-6172926d4f4d 0/0 True Ready worker-0 25596Mi 0 main-safe 4m17s +vg-c7863e12-c143-42bb-8e33-d578ce50d6c7 0/0 True Ready worker-1 25596Mi 0 main-safe 108s +vg-deccf08a-44d4-45f2-aea9-6232c0eeef91 0/0 True Ready worker-2 25596Mi 0 main-unsafe 61s +vg-e0f00cab-03b3-49cf-a2f6-595628a2593c 0/0 True Ready worker-0 25596Mi 0 main-unsafe 4m17s +vg-fe679d22-2bc7-409c-85a9-9f0ee29a6ca2 0/0 True Ready worker-1 25596Mi 0 main-unsafe 108s +``` + +##### Configuring the `sds-local-volume` module (identical disks, "Partial mirror") + +To configure the `sds-local-volume` module following the "Partial mirror" scenario, +create a LocalStorageClass resource and include the LVMVolumeGroup resources +to use only the `main-safe` VG on all your nodes in the `sds-local-volume` module: + +```yaml +kubectl apply -f -<}} +The following procedure describes configuration of additional disks for initial cluster deployment and configuration +when you connect to nodes using SSH. +If you have an already running cluster and you need to connect additional disks to its nodes, +we recommend that you create and configure a VG using the [LVMVolumeGroup resource](./usage.html#creating-an-lvmvolumegroup-resource), +instead of running the commands below. 
+{{< /alert >}} + +To configure additional disks on a node according to the "Full mirror" scenario, do the following: + +1. Create a mirror from all additional disks of a single type (hardware or software). +2. Create a VG named `vg-name` on the mirror. +3. Assign the `storage.deckhouse.io/enabled=true` tag for `vg-name` VG using the following command: + + ```shell + vgchange --addtag storage.deckhouse.io/enabled=true + ``` + +{{< alert level="info" >}} +In the example command, replace `` with a corresponding VG name, depending on the type of additional disks. + +Example of VG names for various disk types: + +- `ssd-nvme`: For NVMe SSD. +- `ssd-sata`: For SATA SSD. +- `hdd`: For HDD. +{{< /alert >}} + +#### Example of SDS module configuration (combined storage, "Full mirror") + +In this example, it's assumed that you have configured three nodes following the "Full mirror" scenario. +In this case, the Deckhouse cluster will have three LVMVolumeGroup resources with randomly generated names. +In the future, it will be possible to specify a name for the LVMVolumeGroup resources +created during automatic VG discovery by adding the `LVM` tag with the desired resource name. + +To list the LVMVolumeGroup resources, run the following command: + +```shell +kubectl get lvmvolumegroups.storage.deckhouse.io +``` + +In the output, you should see the following list: + +```console +NAME THINPOOLS CONFIGURATION APPLIED PHASE NODE SIZE ALLOCATED SIZE VG AGE +vg-08d3730c-9201-428d-966c-45795cba55a6 0/0 True Ready worker-2 25596Mi 0 61s +vg-b59ff9e1-6ef2-4761-b5d2-6172926d4f4d 0/0 True Ready worker-0 25596Mi 0 4m17s +vg-c7863e12-c143-42bb-8e33-d578ce50d6c7 0/0 True Ready worker-1 25596Mi 0 108s +``` + +Where `` is the name you assigned previously. + +##### Configuring the `sds-local-volume` module (combined storage, "Full mirror") + +To configure the `sds-local-volume` module following the "Full mirror" scenario, +create a LocalStorageClass resource and include all LVMVolumeGroup resources +to use the `` VG on all nodes in the `sds-local-volume` module: + +```yaml +kubectl apply -f -< +spec: + lvm: + lvmVolumeGroups: + - name: vg-08d3730c-9201-428d-966c-45795cba55a6 + - name: vg-b59ff9e1-6ef2-4761-b5d2-6172926d4f4d + - name: vg-c7863e12-c143-42bb-8e33-d578ce50d6c7 + type: Thick + reclaimPolicy: Delete + volumeBindingMode: WaitForFirstConsumer +EOF +``` + +{{< alert level="info" >}} +In the example configuration, replace `` with a corresponding name, +depending on the type of additional disks. + +Examples of the LocalStorageClass resource names for additional disks of various types: + +- `local-sc-ssd-nvme`: For NVMe SSD. +- `local-sc-ssd-sata`: For SATA SSD. +- `local-sc-ssd-hdd`: For HDD. +{{< /alert >}} + +##### Configuring the `sds-replicated-volume` module (combined storage, "Full mirror") + +To configure the `sds-replicated-volume` module according to the "Full mirror" scenario, do the following: + +1. Create a ReplicatedStoragePool resource and include all LVMVolumeGroup resources + to use the `` VG on all nodes in the `sds-replicated-volume` module: + + ```yaml + kubectl apply -f -< + spec: + type: LVM + lvmVolumeGroups: + - name: vg-08d3730c-9201-428d-966c-45795cba55a6 + - name: vg-b59ff9e1-6ef2-4761-b5d2-6172926d4f4d + - name: vg-c7863e12-c143-42bb-8e33-d578ce50d6c7 + EOF + ``` + + > In the example configuration, replace `` with a corresponding name, + > depending on the type of additional disks. 
+ > + > Examples of the ReplicatedStoragePool resource names for additional disks of various types: + > + > - `data-ssd-nvme`: For NVMe SSD. + > - `data-ssd-sata`: For SATA SSD. + > - `data-hdd`: For HDD. + +2. Create a ReplicatedStorageClass resource + and specify a name of the previously created ReplicatedStoragePool resource in the `storagePool` field: + + ```yaml + kubectl apply -f -< + replication: None + reclaimPolicy: Delete + topology: Ignored # When specifying this topology, ensure the cluster has no zones (nodes labeled with `topology.kubernetes.io/zone`). + --- + apiVersion: storage.deckhouse.io/v1alpha1 + kind: ReplicatedStorageClass + metadata: + name: replicated-sc-ssd-nvme-r2 + spec: + storagePool: + replication: Availability + reclaimPolicy: Delete + topology: Ignored + --- + apiVersion: storage.deckhouse.io/v1alpha1 + kind: ReplicatedStorageClass + metadata: + name: replicated-sc-ssd-nvme-r3 + spec: + storagePool: + replication: ConsistencyAndAvailability + reclaimPolicy: Delete + topology: Ignored + EOF + ``` + +### Configuring additional disks ("Partial mirror") + +{{< alert level="warning" >}} +Using partitions with the same PARTUUID is not supported, as well as changing the PARTUUID of a partition used for creating a VG. When creating a partition table, we recommended that you choose the `GPT` format, as PARTUUID in `MBR` are pseudo-random and contain the partition number. Additionally, `MBR` does not support PARTLABEL, which can be helpful to identify a partition in Deckhouse later. +{{< /alert >}} + +{{< alert level="warning" >}} +The following procedure describes configuration of additional disks for initial cluster deployment and configuration +when you connect to nodes using SSH. +If you have an already running cluster and you need to connect additional disks to its nodes, +we recommend that you create and configure a VG using the [LVMVolumeGroup resource](./usage.html#creating-an-lvmvolumegroup-resource), +instead of running the commands below. +{{< /alert >}} + +In the "Partial mirror" scenario, you will be using two partitions on each disk: +one to store non-replicable SDS data +and the other one to store replicable SDS data. +The first partition of each disk is used to create a mirror, +while the second partition is used to create a separate VG without mirroring. +This approach maximizes the efficient use of disk space. + +To configure a node with additional disks according to the "Partial mirror" scenario, do the following: + +1. Create two partitions on each additional disk. +2. Create a mirror from the first partitions on each disk. +3. Create a VG named `-safe` on the mirror. +4. Create a VG named `-unsafe` from the second partitions on each disk. +5. Assign the `storage.deckhouse.io/enabled=true` tag for the `ssd-nvme-safe` and `ssd-nvme-unsafe` VG using the following commands: + + ```shell + vgchange ssd-nvme-safe --addtag storage.deckhouse.io/enabled=true + vgchange ssd-nvme-unsafe --addtag storage.deckhouse.io/enabled=true + ``` + + > In the example commands, replace `` with a corresponding VG name, depending on the type of additional disks. + > + > Example of VG names for various disk types: + > + > - `ssd-nvme`: For NVMe SSD. + > - `ssd-sata`: For SATA SSD. + > - `hdd`: For HDD. + +#### Example of SDS module configuration (combined storage, "Partial mirror") + +In this example, it's assumed that you have configured three nodes following the "Partial mirror" scenario. 
+In this case, the Deckhouse cluster will have six LVMVolumeGroup resources with randomly generated names. +In the future, it will be possible to specify a name for the LVMVolumeGroup resources +created during automatic VG discovery by adding the `LVM` tag with the desired resource name. + +To list the LVMVolumeGroup resources, run the following command: + +```shell +kubectl get lvmvolumegroups.storage.deckhouse.io +``` + +In the output, you should see the following list: + +```console +NAME THINPOOLS CONFIGURATION APPLIED PHASE NODE SIZE ALLOCATED SIZE VG AGE +vg-08d3730c-9201-428d-966c-45795cba55a6 0/0 True Ready worker-2 25596Mi 0 -safe 61s +vg-b59ff9e1-6ef2-4761-b5d2-6172926d4f4d 0/0 True Ready worker-0 25596Mi 0 -safe 4m17s +vg-c7863e12-c143-42bb-8e33-d578ce50d6c7 0/0 True Ready worker-1 25596Mi 0 -safe 108s +vg-deccf08a-44d4-45f2-aea9-6232c0eeef91 0/0 True Ready worker-2 25596Mi 0 -unsafe 61s +vg-e0f00cab-03b3-49cf-a2f6-595628a2593c 0/0 True Ready worker-0 25596Mi 0 -unsafe 4m17s +vg-fe679d22-2bc7-409c-85a9-9f0ee29a6ca2 0/0 True Ready worker-1 25596Mi 0 -unsafe 108s +``` + +Where `` is the name you assigned previously. + +##### Configuring the `sds-local-volume` module (combined storage, "Partial mirror") + +To configure the `sds-local-volume` module following the "Partial mirror" scenario, +create a LocalStorageClass resource and include LVMVolumeGroup resources +to use only the `-safe` VG on all nodes in the `sds-local-volume` module: + +```yaml +kubectl apply -f -< +spec: + lvm: + lvmVolumeGroups: + - name: vg-08d3730c-9201-428d-966c-45795cba55a6 + - name: vg-b59ff9e1-6ef2-4761-b5d2-6172926d4f4d + - name: vg-c7863e12-c143-42bb-8e33-d578ce50d6c7 + type: Thick + reclaimPolicy: Delete + volumeBindingMode: WaitForFirstConsumer +EOF +``` + +{{< alert level="info" >}} +In the example configuration, replace `` with a corresponding name, +depending on the type of additional disks. + +Examples of the LocalStorageClass resource names for additional disks of various types: + +- `local-sc-ssd-nvme`: For NVMe SSD. +- `local-sc-ssd-sata`: For SATA SSD. +- `local-sc-ssd-hdd`: For HDD. +{{< /alert >}} + +##### Configuring the `sds-replicated-volume` module (combined storage, "Partial mirror") + +To configure the `sds-replicated-volume` module according to the "Partial mirror" scenario, do the following: + +1. Create a ReplicatedStoragePool resource named `data--safe` and include LVMVolumeGroup resources + for using only the `-safe` VG on all nodes in the `sds-replicated-volume` module in ReplicatedStorageClass + with the `replication: None` parameter: + + ```yaml + kubectl apply -f -<-safe + spec: + type: LVM + lvmVolumeGroups: + - name: vg-08d3730c-9201-428d-966c-45795cba55a6 + - name: vg-b59ff9e1-6ef2-4761-b5d2-6172926d4f4d + - name: vg-c7863e12-c143-42bb-8e33-d578ce50d6c7 + EOF + ``` + + > In the example configuration, replace `data--safe` with a corresponding VG name, + > depending on the type of additional disks. + > + > Example of the ReplicatedStoragePool resource names for additional disks of various types: + > + > - `data-ssd-nvme-safe`: For NVMe SSD. + > - `data-ssd-sata-safe`: For SATA SSD. + > - `data-hdd-safe`: For HDD. + +2. 
Create a ReplicatedStoragePool resource named `data--unsafe` and include LVMVolumeGroup resources + for using only the `-unsafe` VG on all nodes in the `sds-replicated-volume` module in ReplicatedStorageClass + with the `replication: Availability` or `replication: ConsistencyAndAvailability` parameter: + + ```yaml + kubectl apply -f -<-unsafe + spec: + type: LVM + lvmVolumeGroups: + - name: vg-deccf08a-44d4-45f2-aea9-6232c0eeef91 + - name: vg-e0f00cab-03b3-49cf-a2f6-595628a2593c + - name: vg-fe679d22-2bc7-409c-85a9-9f0ee29a6ca2 + EOF + ``` + + > In the example configuration, replace `data--unsafe` with a corresponding VG name, + > depending on the type of additional disks. + > + > Example of the ReplicatedStoragePool resource names for additional disks of various types: + > + > - `data-ssd-nvme-unsafe`: For NVMe SSD. + > - `data-ssd-sata-unsafe`: For SATA SSD. + > - `data-hdd-unsafe`: For HDD. + +3. Create a ReplicatedStorageClass resource and specify a name of the previously created ReplicatedStoragePool resources + for using `-safe` and `-unsafe` VG on all nodes: + + ```yaml + kubectl apply -f -<-safe # Note that you should use `data--safe` for this resource because it has `replication: None`, meaning there will be no replication of data for PV created with this StorageClass. + replication: None + reclaimPolicy: Delete + topology: Ignored # When specifying this topology, ensure the cluster has no zones (nodes labeled with `topology.kubernetes.io/zone`). + --- + apiVersion: storage.deckhouse.io/v1alpha1 + kind: ReplicatedStorageClass + metadata: + name: replicated-sc-ssd-nvme-r2 + spec: + storagePool: data--unsafe # Note that you should use `data--unsafe` for this resource because it has `replication: Availability`, meaning there will be replication of data for PV created with this StorageClass. + replication: Availability + reclaimPolicy: Delete + topology: Ignored + --- + apiVersion: storage.deckhouse.io/v1alpha1 + kind: ReplicatedStorageClass + metadata: + name: replicated-sc-ssd-nvme-r3 + spec: + storagePool: data--unsafe # Note that you should use `data--unsafe` for this resource because it has `replication: ConsistencyAndAvailability`, meaning there will be replication of data for PV created with this StorageClass. + replication: ConsistencyAndAvailability + reclaimPolicy: Delete + topology: Ignored + EOF + ``` + + > In the example configuration, replace `data--unsafe` with a corresponding VG name, + > depending on the type of additional disks. + > + > Example of the ReplicatedStoragePool resource names for additional disks of various types: + > + > - `data-ssd-nvme-unsafe`: For NVMe SSD. + > - `data-ssd-sata-unsafe`: For SATA SSD. + > - `data-hdd-unsafe`: For HDD. + > + > Replace `data--safe` with a corresponding VG name, depending on the type of additional disks. + > + > Example of the ReplicatedStoragePool resource names for additional disks of various types: + > + > - `data-ssd-nvme-safe`: For NVMe SSD. + > - `data-ssd-sata-safe`: For SATA SSD. + > - `data-hdd-safe`: For HDD. 
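+
+After the resources are created, you can quickly confirm that the pools and storage classes described above exist (assuming the default resource names of these CRDs):
+
+```shell
+kubectl get replicatedstoragepools.storage.deckhouse.io
+kubectl get replicatedstorageclasses.storage.deckhouse.io
+```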
diff --git a/docs/LAYOUTS.ru.md b/docs/LAYOUTS.ru.md new file mode 100644 index 00000000..899c83dd --- /dev/null +++ b/docs/LAYOUTS.ru.md @@ -0,0 +1,746 @@ +--- +title: "Модуль sds-node-configurator: сценарии конфигурации sds-модулей" +linkTitle: "Сценарии конфигурации" +description: "Сценарии конфигурации sds-модулей с помощью sds-node-configurator" +--- + +{{< alert level="warning" >}} +Работоспособность модуля гарантируется только при использовании стоковых ядер, поставляемых вместе с [поддерживаемыми дистрибутивами](https://deckhouse.ru/documentation/v1/supported_versions.html#linux). + +Работоспособность модуля при использовании других ядер или дистрибутивов возможна, но не гарантируется. +{{< /alert >}} + +{{< alert level="info" >}} +Если вы создаёте виртуальные машины клонированием, +необходимо изменить UUID у групп томов (VG) на созданных таким образом виртуальных машинах, выполнив команду `vgchange -u`. +Данная команда сгенерирует новые UUID для всех VG на виртуальной машине. +При необходимости команду можно добавить в скрипт `cloud-init`. + +Изменить UUID у VG можно, только если в группе томов нет активных логических томов (LV). +Чтобы деактивировать логический том, отмонтируйте его и выполните следующую команду: + +```shell +lvchange -an +``` + +, где `` — название VG, для деактивации всех томов в группе, или название LV, для деактивации конкретного тома. +{{< /alert >}} + +## Способы и сценарии конфигурации дисковой подсистемы узлов + +На данной странице рассматриваются 2 способа конфигурации дисковой подсистемы на узлах кластера Kubernetes +в зависимости от условий организации хранилища: + +- [Хранилище с одинаковыми дисками](#хранилище-с-одинаковыми-дисками). +- [Комбинированное хранилище](#комбинированное-хранилище). + +Для каждого из способов конфигурации дисковой подсистемы на узлах существует два сценария конфигурации: + +- «Полное зеркало». Мы рекомендуем использовать данный сценарий конфигурации, поскольку он достаточно надёжен и прост в настройке. +- «Частичное зеркало». + +Особенности, плюсы и минусы сценариев приведены в таблице: + +| Сценарий конфигурации | Особенности реализации | Плюсы | Минусы | +|-----------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------| +| «Полное зеркало» |
  • Диски не делятся на разделы, делается зеркало из дисков целиком
  • Используется одна VG для корневой системы и для данных
|
  • Надежно
  • Просто в настройке и использовании
  • Удобно распределять место между разными SDS
|
  • Избыточное место на диске для программно-определяемых хранилищ (SDS), которые сами реплицируют данные
| +| «Частичное зеркало» |