update Readme
Schmaetz committed Mar 12, 2024
1 parent a494a67 commit 56b0114
Showing 211 changed files with 76,003 additions and 28 deletions.
139 changes: 139 additions & 0 deletions README.md
@@ -0,0 +1,139 @@
# CYBERTEC PG Operator

CPO (CYBERTEC PG Operator) allows you to create and run PostgreSQL clusters on Kubernetes.

The operator reduces your efforts and simplifies the administration of your PostgreSQL clusters so that you can concentrate on other things.
<img src="docs/diagrams/logo.png" width="200">

The Postgres Operator makes it easy to run highly available [PostgreSQL](https://www.postgresql.org/)
clusters on Kubernetes (K8s), powered by [Patroni](https://github.com/zalando/patroni).
It is configured only through Postgres manifests (CRDs), which eases integration into automated CI/CD
pipelines without direct access to the Kubernetes API and promotes infrastructure as code over manual operations.
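Because the operator is configured entirely through manifests, creating a cluster is a single `kubectl apply`. The sketch below shows what a minimal cluster manifest might look like; the API version suffix and the spec field names (`numberOfInstances`, `volume.size`, etc.) are assumptions based on common operator conventions, so check the Postgres manifest reference for the authoritative schema.

```yaml
# Hypothetical minimal cluster manifest. The group cpo.opensource.cybertec.at
# and kind postgresql are used by this operator; the /v1 suffix and the
# spec field names below are assumptions to illustrate the shape.
apiVersion: cpo.opensource.cybertec.at/v1
kind: postgresql
metadata:
  name: cluster-1
spec:
  numberOfInstances: 2
  postgresql:
    version: "16"
  volume:
    size: 5Gi
```

Apply it with `kubectl apply -f cluster-1.yaml` and the operator reconciles the cluster from there.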

### Operator features

* Rolling updates on Postgres cluster changes, incl. quick minor version updates
* Live volume resize without pod restarts (AWS EBS, PVC)
* Database connection pooling with PgBouncer
* Fast in-place major version upgrades, including global upgrades of all clusters
* Restore and cloning of Postgres clusters on AWS, GCS and Azure
* Optional logical backups to an S3 or GCS bucket
* Standby clusters from an S3 or GCS WAL archive
* Configurable for non-cloud environments
* Basic credential and user management on K8s, which eases application deployments
* Support for custom TLS certificates
* UI to create and edit Postgres cluster manifests
* Support for AWS EBS gp2 to gp3 migration, including iops and throughput configuration
* Compatible with OpenShift

### PostgreSQL features

* Supports PostgreSQL 13 through 16
* Streaming replication cluster via Patroni
* Point-In-Time-Recovery with
[pg_basebackup](https://www.postgresql.org/docs/16/app-pgbasebackup.html) /
[pgBackRest](https://pgbackrest.org/) via [CYBERTEC-pg-container](https://github.com/cybertec-postgresql/CYBERTEC-pg-container)
* Preload libraries: [bg_mon](https://github.com/CyberDem0n/bg_mon),
[pg_stat_statements](https://www.postgresql.org/docs/16/pgstatstatements.html),
[pgextwlist](https://github.com/dimitri/pgextwlist),
[pg_auth_mon](https://github.com/RafiaSabih/pg_auth_mon)
* Incl. popular Postgres extensions such as
[decoderbufs](https://github.com/debezium/postgres-decoderbufs),
[hypopg](https://github.com/HypoPG/hypopg),
[pg_cron](https://github.com/citusdata/pg_cron),
[pg_partman](https://github.com/pgpartman/pg_partman),
[pg_stat_kcache](https://github.com/powa-team/pg_stat_kcache),
[pgq](https://github.com/pgq/pgq),
[plpgsql_check](https://github.com/okbob/plpgsql_check),
[postgis](https://postgis.net/),
[set_user](https://github.com/pgaudit/set_user) and
[timescaledb](https://github.com/timescale/timescaledb)

The Postgres Operator was originally developed at Zalando and has been used in
production for over five years.

## Supported Postgres & K8s versions

The operator supports all current versions of PostgreSQL, starting with PG 13.
You can find more information in the [Documentation](https://cybertec-postgresql.github.io/CYBERTEC-pg-operator/documentation/release_notes/).
We also support the following K8s versions:
- Kubernetes: 1.21 - 1.28
- OpenShift: 4.8 - 4.13

The operator is generally compatible with all K8s distributions; please contact us regarding specific distributions.


* Integrated backup solution, automatic backups and very easy restore (snapshot & PITR)
* Rolling update procedure for adjustments to the pods and minor updates
* Major upgrade with minimum interruption time
* Reduction of downtime thanks to redundancy, pod anti-affinity, auto-failover and self-healing
* Integrated backup solution, automatic backups and very easy restore (snapshot & PITR)
* Rolling update procedure for adjustments to the pods and minor updates
* Major upgrade with minimum interruption time
* Reduction of downtime thanks to redundancy, pod anti-affinity, auto-failover and self-healing

## Getting started

[Getting started - Documentation](https://cybertec-postgresql.github.io/CYBERTEC-pg-operator/documentation/how-to-use/installation/)

[Tutorials](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials).


## Documentation

Coming soon

Until then, please use the following:

There is a browser-friendly version of this documentation at
[postgres-operator.readthedocs.io](https://postgres-operator.readthedocs.io)

* [How it works](docs/index.md)
* [Installation](docs/quickstart.md#deployment-options)
* [The Postgres experience on K8s](docs/user.md)
* [The Postgres Operator UI](docs/operator-ui.md)
* [DBA options - from RBAC to backup](docs/administrator.md)
* [Build, debug and extend the operator](docs/developer.md)
* [Configuration options](docs/reference/operator_parameters.md)
* [Postgres manifest reference](docs/reference/cluster_manifest.md)
* [Command-line options and environment variables](docs/reference/command_line_and_environment.md)

## Community

Coming soon
8 changes: 4 additions & 4 deletions content/_index.md
@@ -1,9 +1,9 @@
---
title: "CPO (CYBERTEC-PG-Operator)"
date: 2023-03-07T14:26:51+01:00
date: 2024-03-11T14:26:51+01:00
draft: false
---
Current Release: 0.3.0 (xx.xx.xxxx) [Release Notes](/documentation/release_notes)
Current Release: 0.7.0 (xx.xx.xxxx) [Release Notes](/documentation/release_notes)

CPO (CYBERTEC PG Operator) allows you to create and run PostgreSQL clusters on Kubernetes.

@@ -18,8 +18,8 @@ The following features characterise our operator:
- Reduction of downtime thanks to redundancy, pod anti-affinity, auto-failover and self-healing

CPO is tested on the following platforms:
- Kubernetes 1.21 - 1.24
- Openshift 4.8 - 4.11
- Kubernetes: 1.21 - 1.28
- Openshift: 4.8 - 4.13
- Rancher
- AWS EKS
- Azure AKS
4 changes: 4 additions & 0 deletions content/documentation/cluster/modify-cluster.md
@@ -103,3 +103,7 @@ spec:
track_io_timing: "true"
```
These definitions will change the PostgreSQL configuration. Depending on which parameters are changed, the pods may need a restart, which causes downtime if the cluster is not an HA cluster.
You can check parameters and their allowed values in the following sources to ensure a correct value:
- PostgreSQL Documentation
- [PostgreSQL.org](https://postgresql.org)
- [PostgreSQLco.nf](https://postgresqlco.nf/)
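As a concrete example, the parameter block extends naturally to multiple settings. The nesting below follows the `spec` excerpt shown earlier on this page; the values are illustrative only, not recommendations:

```yaml
spec:
  postgresql:
    parameters:
      # Illustrative values only; validate each one against the sources above.
      max_connections: "200"
      shared_buffers: "512MB"
      track_io_timing: "true"
```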
142 changes: 142 additions & 0 deletions content/documentation/cluster/monitoring.md
@@ -0,0 +1,142 @@
---
title: "Start with Monitoring"
date: 2023-12-28T14:26:51+01:00
draft: false
---
The CPO project has prepared several tools that allow you to set up a monitoring stack including alerting and a metric viewer.
This stack is based on:
- Prometheus
- Alertmanager
- Grafana
- exporter-container

CPO provides its own exporter for the PostgreSQL pod, which can be used as a sidecar.

#### Setting up the Monitoring Stack
To set up the monitoring stack, we suggest creating a dedicated namespace and using the prepared kustomization file from the operator tutorials.
```
$ kubectl create namespace cpo-monitoring
namespace/cpo-monitoring created
$ kubectl get pods -n cpo-monitoring
No resources found in cpo-monitoring namespace.
$ git clone https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorial
$ cd CYBERTEC-operator-tutorial/setup/monitoring
# Hint: If you want to use a specific storage class, edit the file pvcs.yaml and add your storage class in the commented part. Please ensure that you remove the comment character.
$ kubectl apply -n cpo-monitoring -k .
serviceaccount/cpo-monitoring created
serviceaccount/cpo-monitoring-tools created
clusterrole.rbac.authorization.k8s.io/cpo-monitoring unchanged
clusterrolebinding.rbac.authorization.k8s.io/cpo-monitoring unchanged
configmap/alertmanager-config created
configmap/alertmanager-rules-config created
configmap/cpo-prometheus-cm created
configmap/grafana-dashboards created
configmap/grafana-datasources created
secret/grafana-secret created
service/cpo-monitoring-alertmanager created
service/cpo-monitoring-grafana created
service/cpo-monitoring-prometheus created
persistentvolumeclaim/alertmanager-pvc created
persistentvolumeclaim/grafana-pvc created
persistentvolumeclaim/prometheus-pvc created
deployment.apps/cpo-monitoring-alertmanager created
deployment.apps/cpo-monitoring-grafana created
deployment.apps/cpo-monitoring-prometheus created
# Hint: If you're not running OpenShift, you will get an error like this:
#   error: resource mapping not found for name: "grafana" namespace: "" from ".":
#   no matches for kind "Route" in version "route.openshift.io/v1" ensure CRDs are installed first
# You can ignore this; it refers to an object of type Route, which is OpenShift-specific.
# It is not needed elsewhere and can be replaced by ingress rules or a LoadBalancer service.
```

After installing the monitoring stack, we can check the created pods inside the namespace:
```
$ kubectl get pods -n cpo-monitoring
----------------------------------------------------------------------------------------
NAME | READY | STATUS | RESTARTS | AGE
cpo-monitoring-alertmanager-5bb8bc79f7-8pdv4 | 1/1 | Running | 0 | 3m35s
cpo-monitoring-grafana-7c7c4f787b-jbj2f | 1/1 | Running | 0 | 3m35s
cpo-monitoring-prometheus-67969b757f-k26jd | 1/1 | Running | 0 | 3m35s
```
The configuration of this monitoring stack is based on several ConfigMaps, which can be modified.

#### Prometheus-Configuration


#### Alertmanager-Configuration


#### Grafana-Configuration


#### Configure a PostgreSQL-Cluster to allow Prometheus to gather metrics

To allow Prometheus to gather metrics from your cluster, you need to make some small modifications to the cluster manifest.
We need to create the monitor object for this:
```
kubectl edit postgresqls.cpo.opensource.cybertec.at cluster-1
...
spec:
...
monitor:
image: docker.io/cybertecpostgresql/cybertec-pg-container:exporter-16.2-1
```

The operator will automatically add the monitoring sidecar to your pods, create a new Postgres user, and add some structures inside the Postgres database to enable everything needed for monitoring. In addition, every resource of your cluster will get a new label: cpo_monitoring_stack=true. Prometheus needs this label to identify all clusters that should be added to the monitoring.
Removing this label will stop Prometheus from gathering data from this cluster.

After changing your cluster manifest, the pods need to be recreated, which is done via a rolling update.
Afterwards you can see that each pod now has more than one container.

```
kubectl get pods
-----------------------------------------------------------------------------
NAME | READY | STATUS | RESTARTS | AGE
cluster-1-0 | 2/2 | Running | 0 | 54s
cluster-1-1 | 2/2 | Running | 0 | 31s
```
You can check the logs to see that the exporter is working, and with curl you can see the exporter's output.

```
kubectl logs cluster-1-0 -c postgres-exporter
kubectl exec --stdin --tty cluster-1-0 -c postgres-exporter -- /bin/bash
[exporter@cluster-1-0 /]# curl http://127.0.0.1:9187/metrics
```
You can now set up a LoadBalancer service or create an ingress rule to allow access to Grafana from outside. Alternatively, you can use a port-forward.

##### LoadBalancer or Nodeport
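
One option is to expose the Grafana deployment via a LoadBalancer (or NodePort) service. The sketch below assumes the Grafana pods carry an `app: cpo-monitoring-grafana` label; check the labels in the tutorial manifests before applying:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cpo-monitoring-grafana-external
  namespace: cpo-monitoring
spec:
  type: LoadBalancer            # use NodePort instead if no cloud load balancer exists
  selector:
    app: cpo-monitoring-grafana # assumed pod label; verify in the tutorial manifests
  ports:
    - port: 9000
      targetPort: 9000
```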

##### Ingress-Rule
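
Alternatively, an ingress rule can route a hostname to the existing `cpo-monitoring-grafana` service. The hostname below is a placeholder, and the sketch assumes an ingress controller is already installed in the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cpo-monitoring-grafana
  namespace: cpo-monitoring
spec:
  rules:
    - host: grafana.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cpo-monitoring-grafana
                port:
                  number: 9000
```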

##### Port-Forwarding
```
$ kubectl get pods -n cpo-monitoring
----------------------------------------------------------------------------------------
NAME | READY | STATUS | RESTARTS | AGE
cpo-monitoring-alertmanager-5bb8bc79f7-8pdv4 | 1/1 | Running | 0 | 6m42s
cpo-monitoring-grafana-7c7c4f787b-jbj2f | 1/1 | Running | 0 | 6m42s
cpo-monitoring-prometheus-67969b757f-k26jd | 1/1 | Running | 0 | 6m42s
$ kubectl port-forward cpo-monitoring-grafana-7c7c4f787b-jbj2f -n cpo-monitoring 9000:9000
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000
```
Open [http://localhost:9000](http://localhost:9000) in your browser.

##### Use a Route (Openshift only)

```
kubectl get route -n cpo-monitoring
```
Use the route address to access Grafana.
6 changes: 6 additions & 0 deletions content/documentation/operator/migrateToNewApi.md
@@ -0,0 +1,6 @@
---
title: "Update to new API from previous Operator-Version"
date: 2023-03-07T14:26:51+01:00
draft: true
---
khjls
6 changes: 0 additions & 6 deletions content/documentation/tutorials/abc.md

This file was deleted.

Expand Up @@ -4,6 +4,29 @@ date: 2023-03-07T14:26:51+01:00
draft: false
---

### 0.7.0

#### Features
- Monitoring sidecar integrated via CRD: [Start with Monitoring](documentation/cluster/monitoring)
- Password hashing defaults to scram-sha-256

#### Changes
- API change: acid.zalan.do is replaced by cpo.opensource.cybertec.at. If you're updating your operator from a previous version, please check this [HowTo Migrate to new API](documentation/operator/migrateToNewApi/)
- Patroni compatibility has been raised to version 3.2.2
- pgBackRest compatibility has been raised to version 2.50


#### Fixes
- PDB bug fixed: single-node clusters no longer create PDBs, which could break Kubernetes updates

#### Supported Versions

- PG: 12 - 16
- Patroni: 3.2.2
- pgBackRest: 2.50
- Kubernetes: 1.21 - 1.28
- Openshift: 4.8 - 4.13

### 0.6.1

Release with fixes
