This repository has been archived by the owner on Oct 25, 2023. It is now read-only.

Merge branch 'docs' into main
paullaffitte committed Apr 1, 2021
2 parents c2285b7 + b9dd7c8 commit 7de0e94
Showing 3 changed files with 57 additions and 13 deletions.
34 changes: 22 additions & 12 deletions README.md
# Dothill-CSI dynamic provisioner for Kubernetes

A dynamic persistent volume (PV) provisioner for Dothill AssuredSAN based storage systems.

[![Build status](https://gitlab.com/enix.io/dothill-csi/badges/main/pipeline.svg)](https://gitlab.com/enix.io/dothill-csi/-/pipelines)
[![Go Report Card](https://goreportcard.com/badge/github.com/enix/dothill-csi)](https://goreportcard.com/report/github.com/enix/dothill-csi)


## Introduction

Managing persistent storage on Kubernetes can be particularly cumbersome, especially for on-premises installations, or when cloud-provider persistent storage solutions are not applicable.

Entry-level SAN appliances usually offer a low-cost yet powerful solution for redundant persistent storage, with the flexibility of attaching it to any host on your network.
It is also privately labeled by some of the world's most prominent storage brands:
- ...

## This project

`Dothill-CSI` implements the **Container Storage Interface** in order to facilitate dynamic provisioning of persistent volumes on Kubernetes clusters.

All Dothill AssuredSAN based equipment shares a common API, which **may or may not be advertised** by the final integrator.

To a lesser extent, the following features are considered for a longer-term future.

## Features

| Features / Availability | roadmap | alpha | beta | general availability |
|---------------------------|-----------|-------|-------|----------------------|
| dynamic provisioning | | | 2.3.x | |
| resize | | 2.4.x | 3.0.0 | |
| snapshot | | 3.1.x | | |
| prometheus metrics | | 3.1.x | | |
| raw blocks | long term | | | |
| iscsi chap authentication | long term | | | |
| fiber channel | long term | | | |
| authentication proxy | long term | | | |
| overview web ui | long term | | | |
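
Dynamic provisioning is consumed through a standard `StorageClass`. A minimal sketch follows; the provisioner name and the parameter keys are assumptions for illustration, not the documented values, so check the chart's own documentation before use:

```yaml
# Hypothetical StorageClass sketch: provisioner name and parameter
# keys below are assumptions, not the documented values.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dothill-storage
provisioner: dothill.csi.enix.io    # assumed CSI driver name
allowVolumeExpansion: true          # resize is beta as of 3.0.0 per the table above
reclaimPolicy: Delete
parameters:
  pool: A                           # hypothetical: backing storage pool
  fsType: ext4
```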

## Installation

### Uninstall iSCSI tools on your node(s)

`iscsid` and `multipathd` are now shipped as sidecars on each node, so it is strongly suggested to uninstall any `open-iscsi` and `multipath-tools` packages.

The decision to ship `iscsid` and `multipathd` as sidecars stems from the desire to simplify the development process as well as to improve monitoring. It is essential that the versions of this software match the candidate versions on your hosts; more about this in the [FAQ](./docs/troubleshooting.md#multipathd-segfault-or-a-volume-got-corrupted). This setup is currently being challenged, see [issue #88](https://github.com/enix/dothill-csi/issues/88) for more information.

### Deploy the provisioner to your kubernetes cluster

To make sure everything went well, there's an example pod you can deploy in the `example/` directory:
kubectl apply -f example/pod.yaml
```

## Documentation

You can find more documentation in the [docs](./docs) directory.

## Command-line arguments

You can list all available command-line flags by using the `-help` switch.

You can run sanity checks by using the `sanity` helper script in the `test/` directory:

```
./test/sanity
```
8 changes: 7 additions & 1 deletion docs/troubleshooting.md
It might happen that your iSCSI devices or sessions end up in a bad state.

In such case, running the following commands should fix the state by removing and recreating devices.

*Please use those commands with **EXTREME CAUTION** and **NEVER IN PRODUCTION** since it can result in data loss.*

```sh
iscsiadm -m node --logout all
```

In order to fix this issue, paste the following line in your `value.yaml` and update your deployment:
```yaml
kubeletPath: /opt/rke/var/lib/kubelet
```
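
After editing your values file, applying the change typically amounts to a Helm upgrade. A sketch, assuming a release named `dothill-csi` and a chart reference `enix/dothill-csi` (both hypothetical here, adjust to your installation):

```sh
# Hypothetical release and chart names; adjust to your installation.
helm upgrade dothill-csi enix/dothill-csi -f value.yaml
```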

## Multipathd segfault or a volume got corrupted

When `multipathd` segfaults, it is known to produce wrong mappings of device paths. When such a multipathed device is mounted, the filesystem can end up corrupted. Checks were added to ensure that the different paths are consistent and lead to the same volume on the appliance.

If you still get this issue, please check that the candidate version of the `multipath-tools` package on your host matches the version in the container. You can do so by running `apt-cache policy multipath-tools` on your host as well as in the `multipathd` container of one of the `dothill-node-server-xxxxx` pods.
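
The comparison described above can be sketched as follows, assuming the node pods run in the `kube-system` namespace (adjust the namespace and pod name to your deployment):

```sh
# On the host: which version of multipath-tools is the candidate?
apt-cache policy multipath-tools

# Inside the multipathd sidecar of a node pod (namespace is an example):
kubectl exec -n kube-system dothill-node-server-xxxxx -c multipathd -- \
  apt-cache policy multipath-tools
```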
28 changes: 28 additions & 0 deletions docs/volume-snapshots.md
# Volume snapshots

## Installation

In order to enable the volume snapshotting feature on your cluster, you first need to install the snapshot-controller as well as the snapshot CRDs. You can do so by following these [instructions](https://github.com/kubernetes-csi/external-snapshotter#usage).
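
A sketch of that installation, using paths from the external-snapshotter repository at release-4.0 (verify against the upstream instructions before use):

```sh
# Install the snapshot CRDs and the snapshot-controller
# (paths taken from the external-snapshotter repository, release-4.0).
git clone --branch release-4.0 https://github.com/kubernetes-csi/external-snapshotter
cd external-snapshotter
kubectl apply -f client/config/crd
kubectl apply -f deploy/kubernetes/snapshot-controller
```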

You will also need to install the snapshot validation webhook, by following these [instructions](https://github.com/kubernetes-csi/external-snapshotter/tree/master/deploy/kubernetes/webhook-example).

## Create a snapshot

To create a snapshot of a volume, you first have to create a `VolumeSnapshotClass`, which is the equivalent of a `StorageClass` but for snapshots. Then you can create a `VolumeSnapshot` which uses the newly created `VolumeSnapshotClass`. You can follow this [snapshot example](../example/snapshot.yaml). For more information, please refer to the Kubernetes [documentation](https://kubernetes.io/docs/concepts/storage/volume-snapshots/).
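
A minimal sketch of the two objects; the driver name is an assumption, and the bundled [snapshot example](../example/snapshot.yaml) remains the authoritative version:

```yaml
# Hypothetical sketch; the driver name is an assumption.
# Use apiVersion snapshot.storage.k8s.io/v1 on clusters where
# the snapshot API has reached GA.
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: dothill-snapshots
driver: dothill.csi.enix.io         # assumed CSI driver name
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: dothill-snapshots
  source:
    persistentVolumeClaimName: my-pvc   # an existing PVC to snapshot
```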

## Restore a snapshot

To restore a snapshot, you have to create a new `PersistentVolumeClaim` and specify the desired snapshot as its `dataSource`. You can find an example [here](https://github.com/kubernetes-csi/external-snapshotter/blob/release-4.0/examples/kubernetes/restore.yaml). You can also refer to the Kubernetes [documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support).
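
A sketch of such a restore claim; the names and the storage class are examples, not values from this project:

```yaml
# Restoring: a new PVC pointing at the snapshot through dataSource.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  storageClassName: dothill-storage   # hypothetical StorageClass name
  dataSource:
    name: my-snapshot                 # the VolumeSnapshot to restore
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                   # must be >= the snapshot's size
```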

## Clone a volume

To clone a volume, you can follow the same procedure as when restoring a snapshot, but reference another volume as the data source instead of a snapshot. An example can be found [here](https://github.com/kubernetes-csi/csi-driver-host-path/blob/master/examples/csi-clone.yaml) and the Kubernetes documentation [here](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#volume-cloning).
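
A sketch of a clone claim, mirroring the restore case; names and storage class are again examples:

```yaml
# Cloning: same shape as a restore, but the dataSource is a PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  storageClassName: dothill-storage   # hypothetical StorageClass name
  dataSource:
    name: my-pvc                      # the existing PVC to clone
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                   # must be >= the source PVC's size
```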

References:
- https://kubernetes.io/docs/concepts/storage/volume-snapshots
- https://github.com/kubernetes-csi/external-snapshotter
- https://kubernetes-csi.github.io/docs/snapshot-controller
- https://kubernetes-csi.github.io/docs/snapshot-validation-webhook
- https://kubernetes-csi.github.io/docs/snapshot-restore-feature
- https://kubernetes-csi.github.io/docs/volume-cloning
- https://github.com/kubernetes-csi/external-snapshotter/tree/release-4.0/examples/kubernetes
