The following environment variables allow overriding the images of the pipelines, triggers, and task addons components:

- `IMAGE_PIPELINES_<DEPLOYMENT-IMAGE-NAME>`, e.g. `IMAGE_PIPELINES_WEBHOOK`

  Overrides pipelines or triggers deployment images. Note: `IMAGE_PIPELINES_*` overrides images in the pipelines manifest only; in the same way, `IMAGE_TRIGGERS_*` overrides images in the triggers manifest only.

- `IMAGE_PIPELINES_ARG_<DEPLOYMENT-IMAGE-ARG-NAME>`, e.g. `IMAGE_PIPELINES_ARG_NOP`

  Overrides images passed as container `args` in pipelines or triggers deployments. Note: `IMAGE_PIPELINES_ARG_*` overrides images in the pipelines manifest only; in the same way, `IMAGE_TRIGGERS_ARG_*` overrides images in the triggers manifest only.

- `IMAGE_ADDONS_<STEP-NAME>`, e.g. `IMAGE_ADDONS_PUSH`

  Overrides `ClusterTask` addons step images.

- `IMAGE_ADDONS_PARAM_<NAME>`, e.g. `IMAGE_ADDONS_PARAM_BUILDER`

  Overrides `ClusterTask` addons param images; the `_PARAM_` variable replaces the value of `Task.Spec.Param`.

Note: if an image name, image argument, step name, or parameter name contains `-`, substitute it with `_` in the variable name.

Note: be cautious when substituting images. For instance, if a deployment has more than one container and a container has the same name as one defined in the override image configuration, this could result in unwanted image substitution.
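The dash-to-underscore rule can be sketched as a small shell helper (`name_to_env_suffix` is a hypothetical name for illustration, not part of the operator):

```shell
# Hypothetical helper: derive the env-var suffix from an image, step, or
# parameter name by upper-casing it and replacing "-" with "_".
name_to_env_suffix() {
  echo "$1" | tr 'a-z-' 'A-Z_'
}

name_to_env_suffix push            # PUSH
name_to_env_suffix kube-rbac-proxy # KUBE_RBAC_PROXY
```

For example, a step named `kube-rbac-proxy` (an illustrative name) would be overridden via `IMAGE_ADDONS_KUBE_RBAC_PROXY`.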
The Go tools require that you clone the repository to the `src/github.com/openshift/tektoncd-pipeline-operator` directory in your `GOPATH`.
To check out this repository:

- Create your own fork of this repo
- Clone it to your machine:

  ```shell
  mkdir -p ${GOPATH}/src/github.com/openshift
  cd ${GOPATH}/src/github.com/openshift
  git clone git@github.com:${YOUR_GITHUB_USERNAME}/tektoncd-pipeline-operator.git
  cd tektoncd-pipeline-operator
  git remote add upstream git@github.com:tektoncd/tektoncd-pipeline-operator.git
  git remote set-url --push upstream no_push
  ```
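To confirm the remotes are wired as intended (pushable `origin`, fetch-only `upstream`), the output of `git remote -v` can be checked with a helper like this (a sketch; `verify_remotes` is a hypothetical name):

```shell
# Read "git remote -v"-style output on stdin and verify that the upstream
# push URL is the no_push placeholder set above.
verify_remotes() {
  remotes=$(cat)
  echo "$remotes" | grep -q '^upstream.*no_push.*(push)' \
    || { echo "upstream push not disabled"; return 1; }
  echo "remotes ok"
}
```

Usage: `git remote -v | verify_remotes`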
You must install these tools:

- `go`: The language tektoncd-pipeline-operator is built in
- `git`: For source control
- `kubectl`: For interacting with your kube cluster
- `operator-sdk`: https://github.com/operator-framework/operator-sdk
- `minikube`: https://kubernetes.io/docs/tasks/tools/install-minikube/
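Before starting, a quick sanity check that the required CLIs are on `PATH` can save a failed run later (a convenience sketch; `check_tools` is a hypothetical helper, not part of this repo):

```shell
# Print each required command that is missing; return non-zero if any are.
check_tools() {
  missing=0
  for t in "$@"; do
    command -v "$t" >/dev/null 2>&1 || { echo "missing: $t"; missing=1; }
  done
  return $missing
}
```

Usage: `check_tools go git kubectl operator-sdk minikube`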
Create a minikube instance:

```shell
minikube start -p mk-tekton \
  --cpus=4 --memory=8192 --kubernetes-version=v1.12.0 \
  --extra-config=apiserver.enable-admission-plugins="LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook" \
  --extra-config=apiserver.service-node-port-range=80-32767
```

Set up the shell environment for the container runtime:

```shell
eval $(minikube docker-env -p mk-tekton)
```
- Change directory to `${GOPATH}/src/github.com/openshift/tektoncd-pipeline-operator`

  ```shell
  cd ${GOPATH}/src/github.com/openshift/tektoncd-pipeline-operator
  ```

- Build the Go binary and the container image

  ```shell
  make osdk-image IMAGE_TAG=${YOUR_REGISTRY}/openshift-pipelines-operator:${YOUR_IMAGE_TAG}
  ```

- Push the container image

  ```shell
  docker push ${YOUR_REGISTRY}/openshift-pipelines-operator:${YOUR_IMAGE_TAG}
  ```

- Edit the `image` value in `deploy/operator.yaml` to match your image
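The last step can also be scripted with `sed` (a sketch; `set_operator_image` is a hypothetical helper, and it assumes the manifest has a single `image:` line to rewrite):

```shell
# Rewrite the "image:" line of a manifest in place.
# $1 = manifest path, $2 = new image reference
set_operator_image() {
  sed -i "s|image: .*|image: $2|" "$1"
}
```

Usage: `set_operator_image deploy/operator.yaml ${YOUR_REGISTRY}/openshift-pipelines-operator:${YOUR_IMAGE_TAG}`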
Clone the OLM repository (into your GOPATH):

```shell
git clone git@github.com:operator-framework/operator-lifecycle-manager.git \
  $GOPATH/src/github.com/operator-framework/operator-lifecycle-manager
```

Install OLM. Ensure minikube is installed and the docker env is set (see above):

```shell
cd $GOPATH/src/github.com/operator-framework/operator-lifecycle-manager
GO111MODULE=on NO_MINIKUBE=true make run-local
```

NOTE: `NO_MINIKUBE=true` ensures that a new minikube instance is not started while installing OLM.
Launch the web console. Open a new terminal:

```shell
cd $GOPATH/src/github.com/operator-framework/operator-lifecycle-manager
./scripts/run_console_local.sh
```
- Change directory to `${GOPATH}/src/github.com/openshift/tektoncd-pipeline-operator`
- Create the `openshift-operators` namespace

  ```shell
  kubectl create namespace openshift-operators
  ```

- Apply the operator CRD

  ```shell
  kubectl apply -f deploy/crds/*_crd.yaml
  ```

- Deploy the operator

  ```shell
  kubectl apply -f deploy/ -n openshift-operators
  ```

- Install the pipeline by creating an `Install` CR

  ```shell
  kubectl apply -f deploy/crds/*_cr.yaml
  ```
- Install minikube (see above)
- Install OLM (see above)
- Create the `openshift-operators` namespace

  ```shell
  kubectl create namespace openshift-operators
  ```

- Generate the local catalog source

  ```shell
  NAMESPACE=operators ./scripts/olm_catalog.sh > olm/openshift-pipelines-operator.resources.yaml
  ```

- Add the local catalog source

  ```shell
  kubectl apply -f olm/openshift-pipelines-operator.resources.yaml
  ```
Once the `CatalogSource` has been applied, you should find it under `Catalog > Operator Management` in the web console.

- Subscribe to `Openshift Pipelines Operator`
  - Open the web console
  - Scroll down to `Openshift Pipelines Operator` under `Openshift Pipelines Operator Registry`

    NOTE: it will take a few minutes to appear after applying the `CatalogSource`

  - Click the `Create Subscription` button
  - Ensure the `namespace` in the yaml is `openshift-operators`, e.g. this sample subscription:

    ```yaml
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      generateName: openshift-pipelines-operator-
      namespace: openshift-operators
    spec:
      source: openshift-pipelines-operator-registry
      sourceNamespace: openshift-operators
      name: openshift-pipelines-operator
      startingCSV: openshift-pipelines-operator.v0.3.1
      channel: alpha
    ```

  - Click the `Create` button at the bottom
- Verify the operator is installed successfully
  - Select `Catalog > Installed Operators`
  - Look for `Status` = `InstallSucceeded`
- Install Tektoncd-Pipeline by creating an `Install` CR
  - Select `Catalog > Developer Catalog`; you should find `Openshift Pipelines Install`
  - Click on it and it should show the Operator Details panel
  - Click on `Create`, which shows an example like the one below:

    ```yaml
    apiVersion: tekton.dev/v1alpha1
    kind: Install
    metadata:
      name: pipelines-install
      namespace: openshift-pipelines-operator
    spec: {}
    ```

    NOTE: This will install Openshift Pipelines resources in the `tekton-pipelines` namespace.
- Verify that the pipeline is installed
  - Ensure the pipeline pods are running

    ```shell
    kubectl get all -n tekton-pipelines
    ```

  - Ensure the pipeline CRDs exist

    ```shell
    kubectl get crds | grep tekton
    ```

    should show

    ```
    clustertasks.tekton.dev
    installs.tekton.dev
    pipelineresources.tekton.dev
    pipelineruns.tekton.dev
    pipelines.tekton.dev
    taskruns.tekton.dev
    tasks.tekton.dev
    ```

NOTE: Now TektonCD Pipelines can be created and run
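The CRD check can be automated with a small helper (a sketch; `check_tekton_crds` is a hypothetical name, and the usage line assumes `kubectl` is pointed at the minikube cluster):

```shell
# Read CRD names on stdin and verify every expected Tekton CRD is present.
check_tekton_crds() {
  found=$(cat)
  for crd in clustertasks installs pipelineresources pipelineruns pipelines taskruns tasks; do
    echo "$found" | grep -q "^${crd}\.tekton\.dev" \
      || { echo "missing ${crd}.tekton.dev"; return 1; }
  done
  echo "all tekton CRDs present"
}
```

Usage: `kubectl get crds -o name | sed 's|.*/||' | check_tekton_crds`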
-
This section explains how to test changes to the operator by executing the entire end-to-end workflow of edit, test, build, package, etc...
It assumes you have already followed install minikube and OLM.
- Make changes to the operator
- Test operator locally with
operator-sdk up local
- Build operator image
operator-sdk build <imagename:tag>
- Update image reference in
deploy/operator.yaml
- Update image reference in CSV
deploy/olm-catalog/openshift-pipelines-operator/0.3.1/openshift-pipelines-operator.v0.3.1.clusterserviceversion.yaml
-
- Build local catalog source localOperators
NAMESPACE=operators ./scripts/olm_catalog.sh > olm/openshift-pipelines-operator.resources.yaml