Unleash Enterprise is a feature flag service for the enterprise. It adds functionality on top of the open source edition of Unleash. Unleash gives you a great overview of all feature toggles across all your applications and services. It comes with official client implementations for Java, Node.js, Go, Ruby, Python, .NET, and Rust.
For more information on Unleash, see the Unleash-hosted website.
The Unleash application contains:
- An Application resource, which collects all the deployment resources into one logical entity
- A ServiceAccount for the Unleash and PostgreSQL Pod.
- A Secret with the PostgreSQL initial random password
- A StatefulSet with Unleash and PostgreSQL.
- A PersistentVolume and PersistentVolumeClaim for Unleash and PostgreSQL. Note that these resources won't be deleted when you delete the application. If you delete the installation and recreate it with the same name, the new installation uses the same PersistentVolumes. As a result, there is no new database initialization, and no new password is set.
- A Service, which exposes PostgreSQL and Unleash for use within the cluster
PostgreSQL exposes a ClusterIP that makes it available to Unleash within the cluster network. Unleash likewise exposes a ClusterIP that makes it available within the cluster network. The steps to connect to your Unleash application are described later in this README. All the data and extensions of Unleash and PostgreSQL are stored on the PersistentVolumeClaim.
Get up and running with a few clicks! Install this Unleash application to a Google Kubernetes Engine cluster using Google Cloud Marketplace. Follow the on-screen instructions.
You can use Google Cloud Shell or a local workstation to complete these steps.
In Google Cloud Marketplace, under the Unleash configuration, choose "Deploy via command line", then generate and download your license key. This key is used to configure the license secret in your Kubernetes cluster.
You'll need the following tools in your environment. If you are using Cloud Shell, these tools are installed in your environment by default.
Configure gcloud as a Docker credential helper:
gcloud auth configure-docker
Create a cluster from the command line. If you already have a cluster that you want to use, this step is optional.
export CLUSTER=unleash-cluster
export ZONE=europe-west1-c
gcloud container clusters create "$CLUSTER" --zone "$ZONE"
gcloud container clusters get-credentials "$CLUSTER" --zone "$ZONE"
Clone this repo and the associated tools repo:
git clone --recursive https://github.com/unleash-hosted/unleash-hosted-gcp-marketplace
An Application resource is a collection of individual Kubernetes components, such as Services, Deployments, and so on, that you can manage as a group.
To set up your cluster to understand Application resources, run the following command:
kubectl apply -f "https://raw.githubusercontent.com/GoogleCloudPlatform/marketplace-k8s-app-tools/master/crd/app-crd.yaml"
You need to run this command once for each cluster.
The Application resource is defined by the Kubernetes SIG-apps community. The source code can be found on github.com/kubernetes-sigs/application.
Find your downloaded license.yaml, which you generated when you configured Unleash through Google Cloud Marketplace, and run the following command to insert the license key into the cluster:
kubectl apply -f license.yaml
After you have installed your license key, run the following command to retrieve the secret name:
kubectl get secrets
# Output:
NAME                           TYPE     DATA   AGE
unleash-enterprise-1-license   Opaque   3      31m
Retrieve the secret name and set the environment variable in your shell environment:
export LICENSE_SECRET_NAME=unleash-enterprise-1-license
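If you prefer to script this step, the secret name can be extracted from the `kubectl get secrets -o name` output. The sketch below uses a captured sample line in place of a live cluster, so the exact secret name shown is an assumption:

```shell
# Sketch: extract the license secret name from `kubectl get secrets -o name`
# output. SAMPLE_OUTPUT stands in for live cluster output, so the exact
# secret name is an assumption; in practice, pipe kubectl output directly.
SAMPLE_OUTPUT='secret/unleash-enterprise-1-license'
LICENSE_SECRET_NAME=$(printf '%s\n' "$SAMPLE_OUTPUT" | grep license | cut -d/ -f2)
echo "$LICENSE_SECRET_NAME"   # prints unleash-enterprise-1-license
```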
Choose an instance name and namespace for the application. In most cases, you can use the default namespace.
export APP_INSTANCE_NAME=unleash-1
export NAMESPACE=default
Set up the image tag:
We advise using a stable image reference, which you can find on the Marketplace Container Registry. Example:
export TAG="3.5.3"
Alternatively, you can use a short tag, which points to the latest image for the selected version. Warning: this tag is not stable, and the referenced image might change over time.
export TAG="3.3"
Configure the container images:
export IMAGE_UNLEASH="marketplace.gcr.io/bricks-software-public/unleash-enterprise"
export IMAGE_POSTGRESQL="marketplace.gcr.io/bricks-software-public/unleash-enterprise/postgresql:$TAG"
export IMAGE_METRICS_EXPORTER="marketplace.gcr.io/bricks-software-public/unleash-enterprise/prometheus-to-sd:$TAG"
export IMAGE_UBB_AGENT="marketplace.gcr.io/bricks-software-public/unleash-enterprise/ubbagent:$TAG"
Generate a random password for PostgreSQL:
export POSTGRESQL_DB_PASSWORD=$(openssl rand -base64 32 | tr -cd '[:alpha:]\n' | head -c 12 | openssl base64)
Enable Stackdriver Metrics Exporter:
NOTE: Your GCP project should have Stackdriver enabled. For non-GCP clusters, export of metrics to Stackdriver is not supported yet.
By default, the integration is disabled. To enable it, change the value to true.
export METRICS_EXPORTER_ENABLED=false
Use helm template to expand the template. We recommend that you save the expanded manifest file for future updates to the application.
helm template "$APP_INSTANCE_NAME" chart/unleash-enterprise \
--namespace "$NAMESPACE" \
--set unleash.image.repo="$IMAGE_UNLEASH" \
--set unleash.image.tag="$TAG" \
--set postgresql.image="$IMAGE_POSTGRESQL" \
--set postgresql.db.password="$POSTGRESQL_DB_PASSWORD" \
--set reportingSecret="$LICENSE_SECRET_NAME" \
--set ubbagent.image="$IMAGE_UBB_AGENT" \
--set metrics.image="$IMAGE_METRICS_EXPORTER" \
--set metrics.exporter.enabled="$METRICS_EXPORTER_ENABLED" \
> "${APP_INSTANCE_NAME}_manifest.yaml"
Use kubectl to apply the manifest to your Kubernetes cluster:
kubectl apply -f "${APP_INSTANCE_NAME}_manifest.yaml" --namespace "${NAMESPACE}"
To get the GCP Console URL for your app, run the following command:
echo "https://console.cloud.google.com/kubernetes/application/${ZONE}/${CLUSTER}/${NAMESPACE}/${APP_INSTANCE_NAME}"
To view the app, open the URL in your browser.
By default, the application is not exposed externally. To get access to the Unleash UI, run the following command:
kubectl port-forward --namespace $NAMESPACE svc/$APP_INSTANCE_NAME-unleash-enterprise-svc 4242:4242
Then, open http://localhost:4242/.
The application is configured to expose its metrics using the Prometheus format.
You can access the metrics at [UNLEASH_CLUSTER_IP]:4242/internal-backstage/prometheus, where [UNLEASH_CLUSTER_IP] is the IP address of the application on the Kubernetes cluster.
Prometheus can be configured to automatically collect the application's metrics. Follow the steps in Configuring Prometheus.
You configure the metrics in the scrape_configs section.
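As a sketch of what such an entry might look like (this is not the official configuration; the job name and the target address are assumptions, so substitute the ClusterIP of your Unleash Service):

```shell
# Sketch: write a minimal Prometheus scrape_configs entry for Unleash.
# The job name and target address are assumptions; substitute the ClusterIP
# of your Unleash Service before using this.
cat > scrape-unleash.yaml <<'EOF'
scrape_configs:
  - job_name: unleash
    metrics_path: /internal-backstage/prometheus
    static_configs:
      - targets: ['UNLEASH_CLUSTER_IP:4242']
EOF
```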
The deployment includes a Prometheus to Stackdriver (prometheus-to-sd) container. If you enabled the option to export metrics to Stackdriver, the metrics are automatically exported to Stackdriver and are visible in Stackdriver Metrics Explorer. The name of each metric starts with the application's name, which you define in the APP_INSTANCE_NAME environment variable.
The export option might not be available for GKE On-prem clusters.
Note: Stackdriver has quotas for the number of custom metrics created in a single GCP project. If the quota is met, additional metrics might not show up in the Stackdriver Metrics Explorer.
You can remove existing metric descriptors using Stackdriver's REST API.
The Unleash-enterprise server is a stateless Deployment; you may increase the number of replicas to handle more load.
kubectl scale deployment.v1.apps/$APP_INSTANCE_NAME-unleash-enterprise --replicas=3
There are two core components in the Unleash-enterprise platform:
- The Unleash-enterprise server, which contains these parts:
  - A web server for the Unleash UI, where you configure your feature toggles
  - An API used by applications to query feature toggle configuration
- The Unleash database
To back up the application, you must back up the database.
Your Unleash-enterprise configuration and project data is stored in the PostgreSQL database.
The following script creates a postgresql/backup.sql file with the contents of the database.
mkdir postgresql
kubectl --namespace $NAMESPACE exec -t \
$(kubectl -n$NAMESPACE get pod -oname | \
sed -n /\\/$APP_INSTANCE_NAME-postgresql/s.pods\\?/..p) \
-c postgresql-server \
-- pg_dumpall -c -U postgres > postgresql/backup.sql
Use this command to see a base64-encoded version of your PostgreSQL password:
kubectl get secret $APP_INSTANCE_NAME-secret --namespace $NAMESPACE -o yaml | grep password:
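The value stored in the Secret is base64-encoded; piping it through `base64 --decode` recovers the plaintext. A self-contained sketch, using a stand-in encoded value rather than live cluster output:

```shell
# Sketch: decode a base64-encoded Secret value. ENCODED is a stand-in for
# the value printed by the `kubectl get secret` command above.
ENCODED='c3VwZXJzZWNyZXQ='
PLAINTEXT=$(echo "$ENCODED" | base64 --decode)
echo "$PLAINTEXT"   # prints supersecret
```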
Before you restore the PostgreSQL database, we recommend closing all incoming connections to the database.
- The following command blocks incoming database connections:

kubectl --namespace $NAMESPACE exec -t \
  $(kubectl -n$NAMESPACE get pod -oname | \
    sed -n /\\/$APP_INSTANCE_NAME-postgresql/s.pods\\?/..p) \
  -c postgresql-server \
  -- psql -U postgres -c "update pg_database set datallowconn = false where datname = 'unleash';"
- To ensure data consistency, use this command to drop all active connections:

kubectl --namespace $NAMESPACE exec -t \
  $(kubectl -n$NAMESPACE get pod -oname | \
    sed -n /\\/$APP_INSTANCE_NAME-postgresql/s.pods\\?/..p) \
  -c postgresql-server \
  -- psql -U postgres -c "select pg_terminate_backend(pid) from pg_stat_activity where datname='unleash';"
- Use this command to restore your data from postgresql/backup.sql:

cat postgresql/backup.sql | kubectl --namespace $NAMESPACE exec -i \
  $(kubectl -n$NAMESPACE get pod -oname | \
    sed -n /\\/$APP_INSTANCE_NAME-postgresql/s.pods\\?/..p) \
  -c postgresql-server \
  -- psql -U postgres
- Use the following command to copy data files from your local folder to $UNLEASH_HOME/data in the Unleash-enterprise Pod:

kubectl --namespace $NAMESPACE cp data \
  $(kubectl -n$NAMESPACE get pod -oname | \
    sed -n /\\/$APP_INSTANCE_NAME-unleash/s.pods\\?/..p):/opt/unleash/data
- Delete the unneeded Unleash application data:

kubectl --namespace $NAMESPACE exec -i \
  $(kubectl -n$NAMESPACE get pod -oname | \
    sed -n /\\/$APP_INSTANCE_NAME-unleash/s.pods\\?/..p) \
  -- bash -c "rm -rf /opt/unleash/data/es5/*"
- Enable incoming connections for the unleash database schema:

kubectl --namespace $NAMESPACE exec -t \
  $(kubectl -n$NAMESPACE get pod -oname | \
    sed -n /\\/$APP_INSTANCE_NAME-postgresql/s.pods\\?/..p) \
  -c postgresql-server \
  -- psql -U postgres -c "update pg_database set datallowconn = true where datname = 'unleash';"
- Patch the Secret to restore your database password:

kubectl --namespace $NAMESPACE patch secret $APP_INSTANCE_NAME-secret \
  -p '{"data": {"password": "'"$ENCODED_PASS"'"}}'

where $ENCODED_PASS is a variable containing the base64-encoded password that you backed up.
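One way to produce $ENCODED_PASS from a plaintext password (the sample value "mypassword" is a stand-in; use the password you backed up):

```shell
# Sketch: base64-encode a plaintext password for use in the Secret patch.
# "mypassword" is a stand-in value; substitute the password you backed up.
PASSWORD='mypassword'
ENCODED_PASS=$(printf '%s' "$PASSWORD" | base64 | tr -d '\n')
echo "$ENCODED_PASS"   # prints bXlwYXNzd29yZA==
```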
- Finally, restart the Unleash Pod:

kubectl --namespace $NAMESPACE exec -i \
  $(kubectl -n$NAMESPACE get pod -oname | \
    sed -n /\\/$APP_INSTANCE_NAME-unleash/s.pods\\?/..p) \
  -- bash -c "kill -1 1"
- In the GCP Console, open Kubernetes Applications.
- From the list of applications, click unleash-1.
- On the Application Details page, click Delete.
Set your installation name and Kubernetes namespace:
export APP_INSTANCE_NAME=unleash-1
export NAMESPACE=default
NOTE: We recommend using a kubectl version that matches the version of your cluster. Using the same versions of kubectl and the cluster helps avoid unforeseen issues.
To delete the resources, use the expanded manifest file that you used for the installation. Run kubectl on the expanded manifest file:
kubectl delete -f ${APP_INSTANCE_NAME}_manifest.yaml --namespace $NAMESPACE
Otherwise, delete the resources using types and a label:
kubectl delete application,deployment,statefulset,service,pvc,secret \
--namespace $NAMESPACE \
--selector app.kubernetes.io/name=$APP_INSTANCE_NAME
By design, the removal of StatefulSets in Kubernetes does not remove PersistentVolumeClaims that were attached to their Pods. This prevents your installations from accidentally deleting stateful data.
To remove the PersistentVolumeClaims with their attached persistent disks, run the following kubectl commands:
for pv in $(kubectl get pvc --namespace $NAMESPACE \
--selector app.kubernetes.io/name=$APP_INSTANCE_NAME \
--output jsonpath='{.items[*].spec.volumeName}');
do
kubectl delete pv/$pv
done
kubectl delete persistentvolumeclaims \
--namespace $NAMESPACE \
--selector app.kubernetes.io/name=$APP_INSTANCE_NAME
Optionally, if you don't need the deployed application or the GKE cluster, delete the cluster using this command:
gcloud container clusters delete "$CLUSTER" --zone "$ZONE"