Conversation

@luismacosta commented Mar 6, 2025

[etcd-operator] Helm chart: based on the config folder, here is an idea for the Helm chart.

Tested on EKS 1.31.

➜  cd helm
➜  helm lint .  

==> Linting .

1 chart(s) linted, 0 chart(s) failed

➜  helm template . --name-template etcd-operator -n etcd-operator -f values.yaml --debug > test.yaml

install.go:224: 2025-03-04 17:11:08.163499 +0000 WET m=+0.021731792 [debug] Original chart version: ""
install.go:241: 2025-03-04 17:11:08.16369 +0000 WET m=+0.021923126 [debug] CHART PATH: /Users/luis.costa/projects/etcd-operator/helm
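
For an actual install (rather than rendering with `helm template` and applying the output), a one-step equivalent would look roughly like this; a sketch only, with the release name and namespace taken from the session above. Note that Helm 3 also installs anything under the chart's `crds/` directory on first install, which would cover the separate `kubectl apply` of the CRD below.

```shell
# Sketch: install the chart directly; release/namespace names from the session above.
helm install etcd-operator . -n etcd-operator --create-namespace -f values.yaml
```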


➜  ~ kubectl create ns etcd-operator

namespace/etcd-operator created


➜  ~ kubectl apply -f /Users/luis.costa/projects/etcd-operator/helm/crds/operator.etcd.io_etcdclusters.yaml

customresourcedefinition.apiextensions.k8s.io/etcdclusters.operator.etcd.io created


➜  ~ kubectl apply -f /Users/luis.costa/projects/etcd-operator/helm/test.yaml -n etcd-operator

serviceaccount/controller-manager created
clusterrole.rbac.authorization.k8s.io/etcd-operator-editor-role created
clusterrole.rbac.authorization.k8s.io/etcd-operator-viewer-role created
clusterrole.rbac.authorization.k8s.io/etcd-operator-metrics-auth-role created
clusterrole.rbac.authorization.k8s.io/etcd-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/etcd-operator-role created
clusterrolebinding.rbac.authorization.k8s.io/etcd-operator-metrics-auth-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/etcd-operator-rolebinding created
role.rbac.authorization.k8s.io/etcd-operator-leader-election-role created
rolebinding.rbac.authorization.k8s.io/etcd-operator-leader-election-rolebinding created
deployment.apps/etcd-operator-controller-manager created
clusterrole.rbac.authorization.k8s.io/manager-role created


➜  ~ kubectl get pods -n etcd-operator

NAME                                                READY   STATUS    RESTARTS   AGE
etcd-operator-controller-manager-64f8784d85-sh9cq   1/1     Running   0          29s


➜  ~ kubectl logs etcd-operator-controller-manager-64f8784d85-sh9cq -n etcd-operator -f

2025-03-06T18:23:56Z	INFO	setup	starting manager
2025-03-06T18:23:56Z	INFO	starting server	{"name": "health probe", "addr": "[::]:8081"}
I0306 18:23:56.757224       1 leaderelection.go:257] attempting to acquire leader lease etcd-operator/cc4a0f4b.etcd.io...
I0306 18:24:11.917739       1 leaderelection.go:271] successfully acquired lease etcd-operator/cc4a0f4b.etcd.io
2025-03-06T18:24:11Z	DEBUG	events	etcd-operator-controller-manager-64f8784d85-sh9cq_27eb28d3-de44-4f08-b31a-d3b980c2e2f4 became leader	{"type": "Normal", "object": {"kind":"Lease","namespace":"etcd-operator","name":"cc4a0f4b.etcd.io","uid":"2a372e0c-0355-4caa-a27e-8ace9118d0c9","apiVersion":"coordination.k8s.io/v1","resourceVersion":"323144474"}, "reason": "LeaderElection"}
2025-03-06T18:24:11Z	INFO	Starting EventSource	{"controller": "etcdcluster", "controllerGroup": "operator.etcd.io", "controllerKind": "EtcdCluster", "source": "kind source: *v1.ConfigMap"}
2025-03-06T18:24:11Z	INFO	Starting EventSource	{"controller": "etcdcluster", "controllerGroup": "operator.etcd.io", "controllerKind": "EtcdCluster", "source": "kind source: *v1.Service"}
2025-03-06T18:24:11Z	INFO	Starting EventSource	{"controller": "etcdcluster", "controllerGroup": "operator.etcd.io", "controllerKind": "EtcdCluster", "source": "kind source: *v1.StatefulSet"}
2025-03-06T18:24:11Z	INFO	Starting EventSource	{"controller": "etcdcluster", "controllerGroup": "operator.etcd.io", "controllerKind": "EtcdCluster", "source": "kind source: *v1alpha1.EtcdCluster"}
2025-03-06T18:24:12Z	INFO	Starting Controller	{"controller": "etcdcluster", "controllerGroup": "operator.etcd.io", "controllerKind": "EtcdCluster"}
2025-03-06T18:24:12Z	INFO	Starting workers	{"controller": "etcdcluster", "controllerGroup": "operator.etcd.io", "controllerKind": "EtcdCluster", "worker count": 1}


➜ kubectl create ns etcd-test
namespace/etcd-test created


➜  kubectl apply -f /Users/luis.costa/projects/etcd-operator/helm/tests/sample.yaml -n etcd-test
etcdcluster.operator.etcd.io/etcdcluster-sample created

➜  kubectl get EtcdCluster -n etcd-test
NAME                 AGE
etcdcluster-sample   40s


➜  kubectl get all -n etcd-test

NAME                       READY   STATUS    RESTARTS   AGE
pod/etcdcluster-sample-0   1/1     Running   0          2m51s
pod/etcdcluster-sample-1   1/1     Running   0          2m43s
pod/etcdcluster-sample-2   1/1     Running   0          2m32s

NAME                         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/etcdcluster-sample   ClusterIP   None         <none>        <none>    2m48s

NAME                                  READY   AGE
statefulset.apps/etcdcluster-sample   3/3     2m52s


➜  kubectl get cm -n etcd-test

NAME                       DATA   AGE
etcdcluster-sample-state   3      3m18s
kube-root-ca.crt           1      6m33s


➜  kubectl exec -it etcdcluster-sample-0 -n etcd-test -- etcdctl member list

799d0cd98f5cac87, started, etcdcluster-sample-0, http://etcdcluster-sample-0.etcdcluster-sample.etcd-test.svc.cluster.local:2380, http://etcdcluster-sample-0.etcdcluster-sample.etcd-test.svc.cluster.local:2379, false
8813bcfdd2ee5043, started, etcdcluster-sample-2, http://etcdcluster-sample-2.etcdcluster-sample.etcd-test.svc.cluster.local:2380, http://etcdcluster-sample-2.etcdcluster-sample.etcd-test.svc.cluster.local:2379, false
fc1c5d2e35eebab9, started, etcdcluster-sample-1, http://etcdcluster-sample-1.etcdcluster-sample.etcd-test.svc.cluster.local:2380, http://etcdcluster-sample-1.etcdcluster-sample.etcd-test.svc.cluster.local:2379, false


➜  curl -v http://<pod_ip>:2379/metrics | grep etcd_cluster_version
  
< HTTP/1.1 200 OK

# HELP etcd_cluster_version Which version is running. 1 for 'cluster_version' label with current cluster version
# TYPE etcd_cluster_version gauge
etcd_cluster_version{cluster_version="3.5"} 1
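
The `<pod_ip>` placeholder above can also be resolved from the cluster; a sketch, using the pod and namespace names from the session above:

```shell
# Sketch: look up the pod IP instead of filling in <pod_ip> by hand
# (pod/namespace names taken from the session above).
POD_IP=$(kubectl get pod etcdcluster-sample-0 -n etcd-test -o jsonpath='{.status.podIP}')
curl -s "http://${POD_IP}:2379/metrics" | grep etcd_cluster_version
```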

@k8s-ci-robot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: luismacosta
Once this PR has been reviewed and has the lgtm label, please assign jberkus for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot

Hi @luismacosta. Thanks for your PR.

I'm waiting for a etcd-io member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Signed-off-by: luis.costa <[email protected]>
@luismacosta luismacosta closed this Mar 6, 2025
@luismacosta luismacosta reopened this Mar 6, 2025
@luismacosta (Author) commented Mar 6, 2025

@ivanvc I've built a Docker image locally and deployed etcd-operator on EKS 1.31 using the Helm chart plus an EtcdCluster CR. You can find the results above.

@ivanvc (Member) commented Mar 6, 2025

Thanks for the pull request, @luismacosta. I summarized what we need to do to provide a Helm chart in #94.

@frederiko (Contributor) left a comment


LGTM overall as an initial chart.
The question I have is with regard to keeping it in sync. For example, under the CRD directory, assuming we run `make manifests`, how can we also "automagically" update these charts? Did you consider using the Helm plugin available for kubebuilder?
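
One way to address the sync question would be a make target that copies the generated CRDs into the chart; a sketch only, where the target name and the `config/crd/bases` source path are assumptions based on the standard kubebuilder layout:

```makefile
# Sketch: keep helm/crds in sync with the output of `make manifests`.
# Target name and source path are assumptions (standard kubebuilder layout).
.PHONY: helm-crds
helm-crds: manifests
	cp config/crd/bases/operator.etcd.io_etcdclusters.yaml helm/crds/
```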

@ahrtr (Member) commented Mar 12, 2025

Thanks for working on this.

High level, I agree that it makes sense to deploy etcd-operator using a Helm chart. But we need a complete plan to migrate from Kustomize to Helm, as #94 points out.

We need contributors' help to drive this.

@ivanvc (Member) commented Mar 14, 2025

We need contributors' help to drive this.

I can help lead this project (I have Helm experience). But I'm not sure if I'll have the bandwidth to do it before etcd v3.6.0 and operator v0.1.0. More likely after KubeCon.

@luismacosta luismacosta requested a review from frederiko June 13, 2025 15:07
@cwrau commented Aug 6, 2025

I'm looking forward to this PR. Just wanted to mention that https://github.com/etcd-io/etcd-operator/pull/95/files#diff-b0ab31d17a21c9e80d5a8cceece36ed4ca2f04a62d11303f1350bfa3a93e1daeR24 should be configurable; not every monitoring namespace is going to have that label (ours doesn't).

Also, https://github.com/etcd-io/etcd-operator/pull/95/files#diff-e564dad3bb1f6b099d8c25ef58f00b705a6c59ad4f9f213f9611c3aac5e5d130 doesn't have the new labels.

And various resources (at least the ClusterRole manager-role) don't have a prefix.

patch
diff --git a/helm/templates/prometheus/monitor.yaml b/helm/templates/prometheus/monitor.yaml
index 277e4d8..86ef340 100644
--- a/helm/templates/prometheus/monitor.yaml
+++ b/helm/templates/prometheus/monitor.yaml
@@ -6,9 +6,7 @@ metadata:
   name: {{ include "etcd-operator.name" . }}-controller-manager-metrics-monitor
   namespace: {{ .Release.Namespace }}
   labels:
-    control-plane: controller-manager
-    app.kubernetes.io/name: etcd-operator
-    app.kubernetes.io/managed-by: kustomize
+    {{- include "etcd-operator.labels" . | nindent 4 }}
 
 spec:
   endpoints:
diff --git a/helm/templates/rbac/manager-clusterrole.yaml b/helm/templates/rbac/manager-clusterrole.yaml
index fda603e..b32c0b3 100644
--- a/helm/templates/rbac/manager-clusterrole.yaml
+++ b/helm/templates/rbac/manager-clusterrole.yaml
@@ -2,7 +2,7 @@
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
-  name: manager-role
+  name: {{ include "etcd-operator.name" . }}-manager-role
 rules:
   - apiGroups: [""]
     resources: ["services", "configmaps", "events"]
diff --git a/helm/templates/rbac/role_binding.yaml b/helm/templates/rbac/role_binding.yaml
index bc24d88..48d850a 100644
--- a/helm/templates/rbac/role_binding.yaml
+++ b/helm/templates/rbac/role_binding.yaml
@@ -7,7 +7,7 @@ metadata:
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
-  name: manager-role
+  name: {{ include "etcd-operator.name" . }}-manager-role
 subjects:
 - kind: ServiceAccount
   name: {{ include "etcd-operator.serviceAccountName" . }}
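
The hard-coded namespace label called out above could be made configurable through values in the same spirit as this patch; a minimal sketch of the selector fragment in `monitor.yaml`, where the `prometheus.namespaceSelector` values key is an assumption and not part of this PR:

```yaml
# Sketch: replace the hard-coded namespace label with a values-driven selector.
# The `prometheus.namespaceSelector` values key is an assumption, not part of the PR.
  namespaceSelector:
    {{- toYaml .Values.prometheus.namespaceSelector | nindent 4 }}
```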
