diff --git a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/gke-cluster-for-helm.md b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/gke-cluster-for-helm.md
index 7b6dc41940..1ee40629a7 100644
--- a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/gke-cluster-for-helm.md
+++ b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/gke-cluster-for-helm.md
@@ -142,6 +142,64 @@ gke-helm-arm64-cluster-default-pool-f4ab8a2d-5ldp   Ready    <none>   5h54m   v1
All nodes should be in **Ready** state and the Kubernetes control plane should be accessible.

+### Remove the arm64 taint from the cluster nodes
+
+GKE can add a `kubernetes.io/arch=arm64:NoSchedule` taint to Arm nodes so that only pods with a matching toleration are scheduled on them. Remove this taint so the charts in this Learning Path can be scheduled without adding tolerations to each chart. For each node whose name starts with **gke**, run the taint command shown below.
+
+{{% notice Note %}}
+The trailing `-` at the end of each command is required: it tells kubectl to remove the taint rather than add it. If a node doesn't have the taint, the command returns a "not found" error that you can safely ignore.
+{{% /notice %}}
+
+For example, using the node names from the output above:
+
+```console
+kubectl taint nodes gke-helm-arm64-cluster-default-pool-f4ab8a2d-5h6f kubernetes.io/arch=arm64:NoSchedule-
+kubectl taint nodes gke-helm-arm64-cluster-default-pool-f4ab8a2d-5ldp kubernetes.io/arch=arm64:NoSchedule-
+```
+
+Replace the node names with your actual node names from the previous command output.
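+
+To confirm that a node no longer carries the taint, describe it and check its `Taints` field. Here, `<node-name>` is a placeholder for one of your node names:
+
+```console
+# Shows the node's taints; "Taints: <none>" means none remain
+kubectl describe node <node-name> | grep -i taints
+```
+
+If no other taints are present, the output is `Taints: <none>`.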
+
+### Create a Hyperdisk storage class for the cluster
+
+C4A (Arm-based Axion) instances use Hyperdisk volumes rather than the Persistent Disk types behind the default storage classes, so you must create a Hyperdisk-backed storage class before deploying workloads that need persistent storage.
+
+Create a new file named `hyperdisk.yaml` with the following content:
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: my-hyperdisk-sc
+provisioner: pd.csi.storage.gke.io
+parameters:
+  type: hyperdisk-balanced # See the Google Cloud docs for other Hyperdisk types
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
+```
+
+Apply `hyperdisk.yaml` to the cluster:
+
+```console
+kubectl apply -f ./hyperdisk.yaml
+```
+
+Confirm that the new storage class has been added:
+
+```console
+kubectl get storageclass
+```
+
+The output should contain the new **my-hyperdisk-sc** storage class:
+
+```output
+NAME                     PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+my-hyperdisk-sc          pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   false                  7m27s
+premium-rwo              pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   20m
+standard                 kubernetes.io/gce-pd    Delete          Immediate              true                   20m
+standard-rwo (default)   pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   20m
+```
+
+You'll use this storage class in the next section.
+
## What you've accomplished and what's next

You've successfully prepared your GKE environment by installing and configuring the Google Cloud SDK, creating a GKE cluster, connecting kubectl to the cluster, and verifying cluster access. Your environment is now ready to deploy applications using Helm charts.
diff --git a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/nginx-helm.md b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/nginx-helm.md
index 1d4b0bc55e..adbaffc61e 100644
--- a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/nginx-helm.md
+++ b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/nginx-helm.md
@@ -26,6 +26,18 @@ my-nginx/
└── templates/
```

+### Clean templates
+
+The default Helm chart includes several files that aren't required for a basic NGINX deployment. Remove the following files from `my-nginx/templates/` to avoid unnecessary complexity and template errors: ingress.yaml, hpa.yaml, serviceaccount.yaml, tests/, NOTES.txt, and httproute.yaml.
+
+```console
+cd ./my-nginx/templates
+rm -rf hpa.yaml ingress.yaml serviceaccount.yaml tests/ NOTES.txt httproute.yaml
+cd $HOME/helm-microservices
+```
+
+Only the NGINX-specific templates remain.
+
### Configure values.yaml

Replace the contents of `my-nginx/values.yaml` with the following to define configurable parameters including the NGINX image, service type, and public port:
@@ -92,21 +104,17 @@ A LoadBalancer provides a public IP required for browser access and is a common
### Install & Access

```console
+cd $HOME/helm-microservices
helm install nginx ./my-nginx
```

```output
NAME: nginx
-LAST DEPLOYED: Tue Jan  6 07:55:52 2026
+LAST DEPLOYED: Tue Jan 20 20:07:47 2026
NAMESPACE: default
STATUS: deployed
REVISION: 1
-NOTES:
-1. Get the application URL by running these commands:
-   NOTE: It may take a few minutes for the LoadBalancer IP to be available.
-   You can watch its status by running 'kubectl get --namespace default svc -w nginx-my-nginx'
-   export SERVICE_IP=$(kubectl get svc --namespace default nginx-my-nginx --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
-   echo http://$SERVICE_IP:80
+TEST SUITE: None
```

### Access NGINX from a browser
@@ -120,11 +128,11 @@ kubectl get svc
Wait until **EXTERNAL-IP** is assigned.

```output
-NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
-kubernetes                 ClusterIP      34.118.224.1     <none>          443/TCP        3h22m
-nginx-my-nginx             LoadBalancer   34.118.239.19    34.63.103.125   80:31501/TCP   52s
-postgres-app-my-postgres   ClusterIP      34.118.225.2     <none>          5432/TCP       13m
-redis-my-redis             ClusterIP      34.118.234.155   <none>          6379/TCP       6m53s
+NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
+kubernetes                 ClusterIP      34.118.224.1     <none>        443/TCP        42m
+nginx-my-nginx             LoadBalancer   34.118.238.110   34.61.85.5    80:30954/TCP   69s
+postgres-app-my-postgres   ClusterIP      34.118.233.240   <none>        5432/TCP       27m
+redis-my-redis             ClusterIP      34.118.229.221   <none>        6379/TCP       8m24s
```

Open the external IP in your browser:
diff --git a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/postgresql-helm.md b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/postgresql-helm.md
index 8fa4a3380e..b0a1fee898 100644
--- a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/postgresql-helm.md
+++ b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/postgresql-helm.md
@@ -105,7 +105,7 @@ This approach prevents hard-coding credentials and follows Kubernetes security b
### Create pvc.yaml

-Create `my-postgres/templates/pvc.yaml` with the following content to request persistent storage so PostgreSQL data remains available even if the pod restarts:
+Create `my-postgres/templates/pvc.yaml` with the following content to request persistent storage so PostgreSQL data remains available even if the pod restarts. Note that the claim specifies the **my-hyperdisk-sc** storage class you created in the previous section; this Hyperdisk-backed storage class is required on C4A instances:

```yaml
apiVersion: v1
@@ -115,6 +115,7 @@ metadata:
spec:
  accessModes:
    - ReadWriteOnce
+  storageClassName: my-hyperdisk-sc
  resources:
    requests:
      storage: {{ .Values.persistence.size }}
@@ -211,31 +212,6 @@ REVISION: 1
TEST SUITE: None
```

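+Because pvc.yaml requests the **my-hyperdisk-sc** class with `volumeBindingMode: WaitForFirstConsumer`, the volume is provisioned only after the PostgreSQL pod is scheduled. As an optional check, list the persistent volume claims (the claim name depends on your chart's templates):
+
+```console
+kubectl get pvc
+```
+
+The STORAGECLASS column should show **my-hyperdisk-sc**, and STATUS should move from **Pending** to **Bound** once the pod is scheduled.
+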
-### Taint the nodes
-
-Taint the nodes to ensure proper scheduling. First, list the nodes:
-
-```console
-kubectl get nodes
-```
-
-The output is similar to:
-
-```output
-NAME                                                STATUS   ROLES    AGE   VERSION
-gke-helm-arm64-cluster-default-pool-7400f0d3-dq80   Ready    <none>   10m   v1.33.5-gke.2072000
-gke-helm-arm64-cluster-default-pool-7400f0d3-v3c9   Ready    <none>   10m   v1.33.5-gke.2072000
-```
-
-For each node starting with **gke**, run the taint command. For example:
-
-```console
-kubectl taint nodes gke-helm-arm64-cluster-default-pool-7400f0d3-dq80 kubernetes.io/arch=arm64:NoSchedule-
-kubectl taint nodes gke-helm-arm64-cluster-default-pool-7400f0d3-v3c9 kubernetes.io/arch=arm64:NoSchedule-
-```
-
-Replace the node names with your actual node names from the previous command output.
-
### Check the runtime status

Check the pod and PVC status:
diff --git a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/redis-helm.md b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/redis-helm.md
index 3bfb6a394e..765154f31c 100644
--- a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/redis-helm.md
+++ b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/redis-helm.md
@@ -29,11 +29,11 @@ my-redis/
### Clean templates

-The default Helm chart includes several files that aren't required for a basic Redis deployment. Remove the following files from `my-redis/templates/` to avoid unnecessary complexity and template errors: ingress.yaml, hpa.yaml, serviceaccount.yaml, tests/, and NOTES.txt.
+The default Helm chart includes several files that aren't required for a basic Redis deployment. Remove the following files from `my-redis/templates/` to avoid unnecessary complexity and template errors: ingress.yaml, hpa.yaml, serviceaccount.yaml, tests/, NOTES.txt, and httproute.yaml.

```console
cd ./my-redis/templates
-rm -rf hpa.yaml ingress.yaml serviceaccount.yaml tests/ NOTES.txt
+rm -rf hpa.yaml ingress.yaml serviceaccount.yaml tests/ NOTES.txt httproute.yaml
cd $HOME/helm-microservices
```

@@ -113,9 +113,15 @@ ClusterIP is used because Redis is intended for internal communication only with
Install Redis and validate that it's running and responding correctly:

```console
+cd $HOME/helm-microservices
helm install redis ./my-redis
+```
+
+Confirm that the Redis pod and service are running:
+
+```console
+kubectl get pods
kubectl get svc
-kubectl exec -it <pod-name> -- redis-cli ping
```

You should see an output similar to:

```output
NAME                                        READY   STATUS    RESTARTS   AGE
postgres-app-my-postgres-6dbc8759b6-jgpxs   1/1     Running   0          6m38s
redis-my-redis-75c88646fb-6lz8v             1/1     Running   0          13s

->kubectl get svc
-redis-my-redis   ClusterIP   34.118.234.155   <none>   6379/TCP   6m14s
+NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
+kubernetes                 ClusterIP   34.118.224.1     <none>        443/TCP    37m
+postgres-app-my-postgres   ClusterIP   34.118.233.240   <none>        5432/TCP   22m
+redis-my-redis             ClusterIP   34.118.229.221   <none>        6379/TCP   3m19s
+```
+
+Finally, send a test ping with `redis-cli`, replacing `<pod-name>` with the Redis pod name from the output above:

-> kubectl exec -it redis-my-redis-75c88646fb-6lz8v -- redis-cli ping
+```console
+kubectl exec -it <pod-name> -- redis-cli ping
+```
+
+You should see output similar to:
+
+```output
PONG
```
+
The Redis pod should be in **Running** state and the service should be **ClusterIP** type.

## What you've accomplished and what's next