
All nodes should be in **Ready** state and the Kubernetes control plane should be accessible.
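You can re-check node status at any time. A guarded sketch, assuming `kubectl` is installed and configured for the cluster (it is a no-op where kubectl is unavailable):

```shell
# Sketch: list cluster nodes and their status.
# Assumes kubectl is configured for your GKE cluster; harmless otherwise.
nodes_listed="yes"
kubectl get nodes 2>/dev/null || true
echo "$nodes_listed"
```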

### Taint the cluster nodes for arm64 support

Remove the arm64 taint from the nodes so that workloads can be scheduled on the arm64 VMs. For each node whose name starts with **gke**, run the following command.

{{% notice Note %}}
Note the trailing "-" at the end of each command: it tells kubectl to remove the taint rather than add it.
{{% /notice %}}

For example, using the node names from the output above:

```console
kubectl taint nodes gke-helm-arm64-cluster-default-pool-f4ab8a2d-5h6f kubernetes.io/arch=arm64:NoSchedule-
kubectl taint nodes gke-helm-arm64-cluster-default-pool-f4ab8a2d-5ldp kubernetes.io/arch=arm64:NoSchedule-
```

Replace the node names with your actual node names from the previous command output.
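If the cluster has many nodes, the per-node commands above can be wrapped in a loop. This is a sketch that assumes `kubectl` is installed and pointed at your cluster; it iterates over every node whose name contains `gke`:

```shell
# Sketch: remove the arm64 NoSchedule taint from every GKE node.
# Assumes kubectl is configured for your cluster; safe to re-run.
taint_status="taints processed"
for node in $(kubectl get nodes -o name 2>/dev/null | grep gke || true); do
  # The trailing "-" removes the taint instead of adding it.
  kubectl taint "$node" kubernetes.io/arch=arm64:NoSchedule- || true
done
echo "$taint_status"
```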

### Create hyperdisk storage class for our cluster

The c4a (Arm-based Axion) machine series supports only Hyperdisk storage, so a new storage class must be created before the cluster can provision volumes.

Create a new file named `hyperdisk.yaml` with the following content:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-hyperdisk-sc
provisioner: pd.csi.storage.gke.io
parameters:
  type: hyperdisk-balanced # Or hyperdisk-ssd, etc.
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

Apply the hyperdisk.yaml file to the cluster:

```console
kubectl apply -f ./hyperdisk.yaml
```

Confirm that the new storage class has been added:

```console
kubectl get storageclass
```

The output should contain the new **my-hyperdisk-sc** storage class:

```output
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
my-hyperdisk-sc pd.csi.storage.gke.io Delete WaitForFirstConsumer false 7m27s
premium-rwo pd.csi.storage.gke.io Delete WaitForFirstConsumer true 20m
standard kubernetes.io/gce-pd Delete Immediate true 20m
standard-rwo (default) pd.csi.storage.gke.io Delete WaitForFirstConsumer true 20m
```

The new storage class will be used in the next section.

## What you've accomplished and what's next

You've successfully prepared your GKE environment by installing and configuring the Google Cloud SDK, creating a GKE cluster, connecting kubectl to the cluster, removing the arm64 scheduling taint, and creating a Hyperdisk-backed storage class. Your environment is now ready to deploy applications using Helm charts.

```output
my-nginx/
└── templates/
```

### Clean templates

The default Helm chart includes several files that aren't required for a basic Nginx deployment. Remove the following files from `my-nginx/templates/` to avoid unnecessary complexity and template errors: ingress.yaml, hpa.yaml, serviceaccount.yaml, tests/, NOTES.txt, and httproute.yaml.

```console
cd ./my-nginx/templates
rm -rf hpa.yaml ingress.yaml serviceaccount.yaml tests/ NOTES.txt httproute.yaml
cd $HOME/helm-microservices
```

Only the nginx-specific templates are kept.

### Configure values.yaml

Replace the contents of `my-nginx/values.yaml` with the following to define configurable parameters including the NGINX image, service type, and public port:
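The full `values.yaml` is not shown here; as a rough sketch of the shape such a file takes (the field names below are illustrative assumptions, not the chart's actual contents):

```yaml
# Illustrative values.yaml sketch -- field names are assumptions,
# not the tutorial's actual file.
replicaCount: 1
image:
  repository: nginx
  tag: stable
service:
  type: LoadBalancer
  port: 80
```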
A LoadBalancer provides a public IP required for browser access and is a common choice for exposing a web service.
### Install & Access

```console
cd $HOME/helm-microservices
helm install nginx ./my-nginx
```

```output
NAME: nginx
LAST DEPLOYED: Tue Jan 20 20:07:47 2026
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch its status by running 'kubectl get --namespace default svc -w nginx-my-nginx'
export SERVICE_IP=$(kubectl get svc --namespace default nginx-my-nginx --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
echo http://$SERVICE_IP:80
TEST SUITE: None
```

### Access NGINX from a browser
Check the service:

```console
kubectl get svc
```
Wait until **EXTERNAL-IP** is assigned.

```output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 34.118.224.1 <none> 443/TCP 42m
nginx-my-nginx LoadBalancer 34.118.238.110 34.61.85.5 80:30954/TCP 69s
postgres-app-my-postgres ClusterIP 34.118.233.240 <none> 5432/TCP 27m
redis-my-redis ClusterIP 34.118.229.221 <none> 6379/TCP 8m24s
```

Open the external IP in your browser:
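The address to open can also be read directly from the service object. A sketch (the service name `nginx-my-nginx` comes from the install output above; it prints a `<pending>` placeholder until GKE assigns the IP, or where kubectl is not configured):

```shell
# Sketch: read the LoadBalancer IP for the nginx service.
# Prints "<pending>" until the IP is assigned (or off-cluster).
EXTERNAL_IP=$(kubectl get svc nginx-my-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null || true)
nginx_url="http://${EXTERNAL_IP:-<pending>}"
echo "$nginx_url"
```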
This approach prevents hard-coding credentials and follows Kubernetes security best practices.

### Create pvc.yaml

Create `my-postgres/templates/pvc.yaml` with the following content to request persistent storage, so PostgreSQL data remains available even if the pod restarts. Note the `storageClassName` field: it references the **my-hyperdisk-sc** storage class created earlier, which is required for the c4a architecture:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-pvc  # assumed name; use your chart's own naming
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-hyperdisk-sc
  resources:
    requests:
      storage: {{ .Values.persistence.size }}
```
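Before installing the chart, you can verify locally that the PVC template picks up the storage class. This is a sketch, assuming Helm is installed and the chart directory is `./my-postgres`; it is guarded so it skips cleanly elsewhere:

```shell
# Sketch: render the chart and confirm storageClassName is emitted.
# Guarded: skips when helm or the chart directory is unavailable.
if command -v helm >/dev/null 2>&1 && [ -d ./my-postgres ]; then
  render_check=$(helm template postgres-app ./my-postgres | grep storageClassName || true)
else
  render_check="skipped"
fi
echo "$render_check"
```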


### Check the runtime status

Check the pod and PVC status:
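A typical check looks like this (a guarded sketch assuming the default namespace; with `volumeBindingMode: WaitForFirstConsumer`, the PVC stays `Pending` until the PostgreSQL pod is scheduled, then shows `Bound`):

```shell
# Sketch: inspect pod and PVC status; tolerant of kubectl being absent.
runtime_checked="yes"
kubectl get pods 2>/dev/null || true
kubectl get pvc 2>/dev/null || true
echo "$runtime_checked"
```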

### Clean templates

The default Helm chart includes several files that aren't required for a basic Redis deployment. Remove the following files from `my-redis/templates/` to avoid unnecessary complexity and template errors: ingress.yaml, hpa.yaml, serviceaccount.yaml, tests/, NOTES.txt, and httproute.yaml.

```console
cd ./my-redis/templates
rm -rf hpa.yaml ingress.yaml serviceaccount.yaml tests/ NOTES.txt httproute.yaml
cd $HOME/helm-microservices
```

ClusterIP is used because Redis is intended for internal communication only within the cluster.
Install Redis and validate that it's running and responding correctly:

```console
cd $HOME/helm-microservices
helm install redis ./my-redis
```

Confirm that the Redis pod and service are up:

```console
kubectl get pods
kubectl get svc
```

You should see an output similar to:
```output
NAME                                        READY   STATUS    RESTARTS   AGE
postgres-app-my-postgres-6dbc8759b6-jgpxs   1/1     Running   0          6m38s
redis-my-redis-75c88646fb-6lz8v             1/1     Running   0          13s

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes                 ClusterIP   34.118.224.1     <none>        443/TCP    37m
postgres-app-my-postgres   ClusterIP   34.118.233.240   <none>        5432/TCP   22m
redis-my-redis             ClusterIP   34.118.229.221   <none>        6379/TCP   3m19s
```

Finally, execute a sample ping via `redis-cli`, replacing `<redis-pod>` with your pod name from the output above:
```console
kubectl exec -it <redis-pod> -- redis-cli ping
```

You should see an output similar to:

```output
PONG
```


The Redis pod should be in the **Running** state and the service should be of type **ClusterIP**.
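Beyond `ping`, a write/read round-trip confirms Redis is actually serving data. A sketch that looks up the pod by label; the selector value `my-redis` is an assumption based on the chart name, so adjust it if your chart's labels differ:

```shell
# Sketch: look up the Redis pod by label and do a set/get round-trip.
# The app.kubernetes.io/name=my-redis label is an assumption.
REDIS_POD=$(kubectl get pods -l app.kubernetes.io/name=my-redis \
  -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true)
if [ -n "$REDIS_POD" ]; then
  kubectl exec "$REDIS_POD" -- redis-cli set greeting hello
  kubectl exec "$REDIS_POD" -- redis-cli get greeting
  roundtrip="attempted"
else
  roundtrip="skipped"
fi
echo "$roundtrip"
```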

## What you've accomplished and what's next