Helm2 #2802 (Merged)

@@ -1,10 +1,6 @@
---
title: Install and validate Helm on Google Cloud C4A Arm-based VMs

draft: true
cascade:
draft: true

minutes_to_complete: 60

who_is_this_for: This is an introductory topic intended for developers who want to get hands-on experience using Helm on Linux Arm64 systems, specifically Google Cloud C4A virtual machines powered by Axion processors.
@@ -75,3 +71,9 @@ weight: 1
layout: "learningpathall"
learning_path_main_page: "yes"
---

Helm is the package manager for Kubernetes, simplifying application deployment and lifecycle management. Google Axion C4A instances, built on Arm Neoverse-V2 cores, provide an efficient platform for running Kubernetes workloads.

In this Learning Path, you learn how to install and configure Helm on a Google Cloud C4A virtual machine, create Kubernetes clusters, and deploy applications using both official and custom Helm charts. You validate Helm's core functionality and explore deployment patterns for PostgreSQL, Redis, and NGINX on Arm-based infrastructure.

By the end of this Learning Path, you'll have practical experience with Helm on Arm64 systems and understand how to deploy cloud-native applications on Google's Axion processors.
@@ -8,18 +8,18 @@ layout: "learningpathall"

## Explore Google Axion C4A instances in Google Cloud

Google Axion C4A is a family of Arm-based virtual machines built on Googles custom Axion CPU, which is based on Arm Neoverse-V2 cores. Designed to deliver high performance with improved energy efficiency, these virtual machines offer strong performance for modern cloud workloads such as CI/CD pipelines, microservices, media processing, and general-purpose applications.
Google Axion C4A is a family of Arm-based VMs built on Google's custom Axion processors, which use Arm Neoverse-V2 cores. These VMs deliver high performance with improved energy efficiency for modern cloud workloads such as CI/CD pipelines, microservices, media processing, and general-purpose applications.

The C4A series provides an Arm-based alternative to x86 virtual machines, enabling developers to evaluate cost, performance, and efficiency trade-offs in Google Cloud. For Kubernetes users, Axion C4A instances provide a practical way to run Arm-native clusters and validate tooling such as Helm on modern cloud infrastructure.
The C4A series provides an Arm-based alternative to x86 VMs, enabling developers to evaluate cost, performance, and efficiency trade-offs in Google Cloud. For Kubernetes users, C4A instances provide a practical way to run Arm-native clusters and validate tooling such as Helm on modern cloud infrastructure.

To learn more about Google Axion, see the Google blog [Introducing Google Axion Processors, our new Arm-based CPUs](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu).

## Explore Helm

Helm is the package manager for Kubernetes. It simplifies application deployment, upgrades, rollbacks, and lifecycle management by packaging Kubernetes resources into reusable charts.

Helm runs as a lightweight CLI that interacts directly with the Kubernetes API. Because it is architecture-agnostic, it works consistently across x86 and Arm64 clusters, including those running on Google Axion C4A instances.
As a lightweight CLI, Helm interacts directly with the Kubernetes API. Its architecture-agnostic design ensures consistent behavior across x86 and Arm64 clusters, including those running on Google Axion C4A instances.

In this Learning Path, you use Helm to deploy and manage applications on an Arm-based Kubernetes environment and verify common workflows such as install, upgrade, and uninstall operations.
In this Learning Path you'll use Helm to deploy and manage applications on an Arm-based Kubernetes environment, verifying common workflows such as install, upgrade, and uninstall operations.

For more information, see the [Helm website](https://helm.sh/).
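To make "reusable charts" concrete, Helm can scaffold a chart skeleton for you. A quick sketch, where `demo-chart` is a placeholder name and this step is not part of the PR's instructions:

```console
# Scaffold a starter chart containing Chart.yaml, values.yaml, and templates/
helm create demo-chart
ls demo-chart
```

The generated `templates/` directory holds the Kubernetes manifests that Helm renders and packages into a release.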
@@ -10,6 +10,7 @@ layout: learningpathall
This section walks you through baseline testing to confirm that Helm works correctly on an Arm64-based Kubernetes cluster by validating core workflows such as install, upgrade, and uninstall.

## Add Helm repository

Add the Bitnami Helm chart repository and update the local index:

```console
@@ -26,12 +27,12 @@ Update Complete. ⎈Happy Helming!⎈
```
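The commands collapsed in this diff follow the standard pattern used again later in this Learning Path; a sketch:

```console
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
```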

## Install a sample application
Install a sample NGINX application using a Helm chart:

Install a sample NGINX application to validate that Helm can create releases:

```console
helm install nginx bitnami/nginx
```
Deploy a simple test app to validate that Helm can create releases on the cluster.

The output is similar to:
```output
@@ -49,7 +50,8 @@ APP VERSION: 1.29.4


## Validate deployment
Verify that the Helm release is created:

Verify that Helm created the release:

```console
helm list
@@ -78,14 +80,18 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3m33s
nginx LoadBalancer 10.96.88.148 <pending> 80:30166/TCP,443:32128/TCP 117s
```
All pods should be in the **Running** state. If pods are in **Pending** state, wait 30 to 60 seconds for container images to download, then retry the commands above.

All pods should be in **Running** state. If pods show **Pending**, wait 30 to 60 seconds for container images to download and retry.
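To watch pods move from **Pending** to **Running** instead of retrying manually, one option (not in the original steps) is:

```console
kubectl get pods --watch
```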


## Validate Helm lifecycle

Confirm that Helm supports the full application lifecycle on Arm64.

### Upgrade the release

Update the existing release to a new revision:

```console
helm upgrade nginx bitnami/nginx
```
@@ -97,7 +103,8 @@ Release "nginx" has been upgraded. Happy Helming!
```
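To confirm the upgrade produced a new revision, `helm history` lists the release's revisions (an extra check, not shown in the original steps):

```console
helm history nginx
```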

### Uninstall the release
Ensure Helm can cleanly remove the release and associated resources.

Remove the release and associated resources:

```console
helm uninstall nginx
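# Remaining output is collapsed in this diff. As a follow-up check (not in
# the original steps), re-running helm list should show no releases:
helm list
```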
@@ -5,19 +5,17 @@ weight: 10
### FIXED, DO NOT MODIFY
layout: learningpathall
---
## Overview
## Run concurrent Helm benchmarks

This section explains how to benchmark Helm CLI concurrency on an Arm64-based GCP SUSE virtual machine.

Since Helm does not provide built-in performance metrics, concurrency behavior is measured by running multiple Helm commands in parallel and recording the total execution time.
In this section, you'll benchmark Helm CLI concurrency on your Arm64-based GCP SUSE VM. Since Helm doesn't provide built-in performance metrics, you'll measure concurrency behavior by running multiple Helm commands in parallel and recording total execution time.

### Prerequisites

{{% notice Note %}}
Ensure the local Kubernetes cluster created earlier is running and has sufficient resources to deploy multiple NGINX replicas.
Ensure the local Kubernetes cluster is running and has sufficient resources to deploy multiple NGINX replicas.
{{% /notice %}}

Before starting the benchmark, ensure Helm is installed and the Kubernetes cluster is accessible.
Verify Helm and Kubernetes access:

```console
helm version
@@ -26,7 +24,8 @@ kubectl get nodes
All nodes should be in `Ready` state.

### Add a Helm repository
Helm installs applications using "charts." Configure Helm to download charts from the Bitnami repository and update the local chart index.

Configure Helm to download charts from the Bitnami repository:

```console
helm repo add bitnami https://charts.bitnami.com/bitnami
@@ -41,7 +40,8 @@ kubectl create namespace helm-bench
```

### Warm-up run (recommended)
Prepare the cluster by pulling container images and initializing caches.

Prepare the cluster by pulling container images:

```console
helm install warmup bitnami/nginx \
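  -n helm-bench --wait  # continuation collapsed in this diff; these flags are assumptions
```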
@@ -75,7 +75,8 @@ helm uninstall warmup -n helm-bench
Helm does not provide native concurrency or throughput metrics. Concurrency benchmarking is performed by executing multiple Helm CLI operations in parallel and measuring overall completion time.
{{% /notice %}}
### Concurrent Helm install benchmark (no wait)
Run multiple Helm installs in parallel using background jobs.

Run multiple Helm installs in parallel:

```console
time (
@@ -88,10 +89,8 @@ done
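# The loop header and install command are collapsed in this diff; a plausible
# reconstruction (release names are assumptions; the count of 3 matches the
# summary table below):
for i in 1 2 3; do
  helm install "nginx-$i" bitnami/nginx -n helm-bench &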
wait
)
```
This step simulates multiple teams deploying applications at the same time.
Helm submits all requests without waiting for pods to fully start.

This measures Helm concurrency handling, Kubernetes API responsiveness, and Helm CLI client-side execution behavior on Arm64.
This measures Helm concurrency handling, Kubernetes API responsiveness, and client-side execution on Arm64.

You should see an output similar to:
```output
@@ -102,7 +101,7 @@ sys 0m0.339s

### Verify deployments

Confirm that Helm reports all components were installed successfully and that Kubernetes created and started the applications:
Confirm that all components were installed successfully:

```console
helm list -n helm-bench
@@ -112,7 +111,8 @@ kubectl get pods -n helm-bench
All releases should be in `deployed` state and pods should be in `Running` status.

### Concurrent Helm install benchmark (with --wait)
Run a benchmark that includes workload readiness time.

Run a benchmark that includes workload readiness time:

```console
time (
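# Body collapsed in this diff; same pattern as above, with --wait and the
# 15-minute timeout from the summary table (a sketch, not the exact script):
for i in 1 2 3; do
  helm install "nginx-wait-$i" bitnami/nginx -n helm-bench --wait --timeout 15m &
done
wait
)
```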
@@ -138,7 +138,12 @@ sys 0m0.312s

### Metrics to record

Record the following metrics: total elapsed time (overall time taken to complete all installs), number of parallel installs, any failures or Kubernetes API errors, and pod readiness delay (time pods take to become Ready under resource pressure).
Record the following:

- Total elapsed time (overall time taken to complete all installs)
- Number of parallel installs
- Any failures or Kubernetes API errors
- Pod readiness delay (time pods take to become Ready under resource pressure)
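Pod readiness delay can be measured directly with `kubectl wait`; a sketch (not part of the original steps; the 15-minute timeout mirrors the summary table below):

```console
# Time how long all pods in the namespace take to report Ready
time kubectl wait --for=condition=Ready pods --all -n helm-bench --timeout=15m
```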

### Benchmark summary
Results from the earlier run on the `c4a-standard-4` (4 vCPU, 16 GB memory) Arm64 VM in GCP (SUSE):
@@ -149,19 +154,20 @@ Results from the earlier run on the `c4a-standard-4` (4 vCPU, 16 GB memory) Arm6
| Parallel Install (With Wait) | 3 | Yes | 15m | **12.92 s** |

Key observations:
- In this configuration, Helm CLI operations complete efficiently on an Arm64-based Axion C4A virtual machine, establishing a baseline for further testing.
- The --wait flag significantly increases total execution time because Helm waits for workloads to reach a Ready state, reflecting scheduler and image-pull delays rather than Helm CLI overhead.
- For this baseline test, parallel Helm installs complete with minimal contention, indicating that client-side execution and Kubernetes API handling are not bottlenecks at this scale.
- End-to-end workload readiness dominates total deployment time, showing that cluster resource availability and container image pulls have a greater impact than Helm CLI execution.

- Helm CLI operations complete efficiently on an Arm64-based Axion C4A VM, establishing a baseline for further testing
- The `--wait` flag significantly increases total execution time because Helm waits for workloads to reach Ready state, reflecting scheduler and image-pull delays rather than Helm CLI overhead
- Parallel Helm installs complete with minimal contention, indicating that client-side execution and Kubernetes API handling aren't bottlenecks at this scale
- End-to-end workload readiness dominates total deployment time, showing that cluster resource availability and container image pulls have greater impact than Helm CLI execution

## What you've accomplished

You have successfully benchmarked Helm concurrency on a Google Axion C4A Arm64 virtual machine. The benchmarks demonstrated that:
You have successfully benchmarked Helm concurrency on a Google Axion C4A Arm64 VM:

- Helm CLI operations execute efficiently on Arm64 architecture with the Axion processor
- Helm CLI operations execute efficiently on Arm64 architecture with Axion processors
- Parallel Helm installs complete in under 4 seconds when not waiting for pod readiness
- Using the `--wait` flag extends deployment time to reflect actual workload initialization
- Kubernetes API and client-side performance scale well under concurrent load
- Image pulling and resource scheduling have more impact on total deployment time than Helm CLI execution

These results establish a performance baseline for deploying containerized workloads with Helm on Arm64-based cloud infrastructure, helping you make informed decisions about deployment strategies and resource allocation.
These results establish a performance baseline for deploying containerized workloads with Helm on Arm64-based cloud infrastructure.
@@ -1,26 +1,27 @@
---
title: Prepare GKE Cluster for Helm Deployments
title: Prepare a GKE cluster for Helm deployments
weight: 6

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Overview
This section explains how to prepare a **Google Kubernetes Engine (GKE) cluster** for deploying Helm charts.
The prepared GKE cluster is used to deploy the following services using custom Helm charts:
## Set up your GKE environment

In this section you'll prepare a Google Kubernetes Engine (GKE) cluster for deploying Helm charts. The GKE cluster hosts the following services:

- PostgreSQL
- Redis
- NGINX

This setup differs from the earlier KinD-based local cluster, which was intended only for local validation.
This setup differs from the earlier KinD-based local cluster, which was used only for local validation.

## Prerequisites

Before starting, ensure that Docker, kubectl, and Helm are installed, and that you have a Google Cloud account available. If Helm and kubectl aren't installed, complete the **Install Helm** section first.
Ensure that Docker, kubectl, and Helm are installed, and that you have a Google Cloud account available. If Helm and kubectl aren't installed, complete the previous section first.

### Verify kubectl installation

### Verify kubectl Installation
Confirm that kubectl is available:

```console
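kubectl version --client  # command collapsed in this diff; assumed from the client version output below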
@@ -32,46 +33,50 @@ Client Version: v1.30.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
```

### Install Python 3.11

Install python3.11:
Install Python 3.11:

```bash
sudo zypper install -y python311
which python3.11
```

### Install Google Cloud SDK (gcloud)

The Google Cloud SDK is required to create and manage GKE clusters.

**Download and extract:**
Download and extract:

```console
wget https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-460.0.0-linux-arm.tar.gz
tar -xvf google-cloud-sdk-460.0.0-linux-arm.tar.gz
```

**Install gcloud:**
Install gcloud:

```console
./google-cloud-sdk/install.sh
```

The shell will exit. Bring up a new SSH Shell:
After installation completes, exit and reconnect to apply the PATH changes:

```console
exit
```

### Initialize gcloud

Authenticate and configure the Google Cloud CLI:

```console
gcloud init
```

During initialization, select **Login with a new account**. You'll be prompted to use your browser to authenticate to Google and receive an auth code to copy back. Select the project you want to use and choose default settings when unsure.
During initialization, select **Login with a new account**. You'll be prompted to authenticate using your browser and receive an auth code to copy back. Select the project you want to use and choose default settings when unsure.

### Get the list of Google project IDs

Retrieve the list of project IDs:

```console
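gcloud projects list  # command collapsed in this diff; assumed from the listing below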
@@ -85,28 +90,35 @@ PROJECT_ID NAME PROJECT_NUMBER
arm-lp-test arm-lp-test 834184475014
```

Note the **PROJECT_ID** for the project you want to set as active for use in the next step.
### Set the Active Project
Note the **PROJECT_ID** for use in the next step.

### Set the active project

Ensure the correct GCP project is selected:

```console
gcloud config set project YOUR_PROJECT_ID
gcloud config set project <YOUR_PROJECT_ID>
```

Replace `<YOUR_PROJECT_ID>` with your actual project ID from the previous step.
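For example, using the sample project ID from the listing above:

```console
gcloud config set project arm-lp-test
```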

### Install the auth plugin for gcloud

```console
gcloud components install gke-gcloud-auth-plugin
```

### Enable Kubernetes API

Enable the required API for GKE:

```console
gcloud services enable container.googleapis.com
```

### Create a GKE Cluster
Create a Kubernetes cluster to host Helm deployments. Replace `YOUR_PROJECT_ID` with the project ID you set previously.
### Create a GKE cluster

Create a Kubernetes cluster to host Helm deployments:

```console
gcloud container clusters create helm-arm64-cluster \
@@ -116,17 +128,17 @@ gcloud container clusters create helm-arm64-cluster \
--no-enable-ip-alias
```
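The flags collapsed in this diff set the zone, machine type, and node count. As a sketch only: the zone matches the `get-credentials` step below, while the machine type and node count are assumptions rather than the PR's exact values:

```console
gcloud container clusters create helm-arm64-cluster \
    --zone us-central1-a \
    --machine-type c4a-standard-4 \
    --num-nodes 2 \
    --no-enable-ip-alias
```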

This creates a standard GKE cluster. You can adjust the node count and machine type later as needed.
### Configure kubectl access to GKE

### Configure kubectl Access to GKE
Fetch cluster credentials:

```console
gcloud container clusters get-credentials helm-arm64-cluster \
--zone us-central1-a
```

### Verify Cluster Access
### Verify cluster access

Confirm Kubernetes access:

```console
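# Command collapsed in this diff; cluster access is typically confirmed with
# kubectl get nodes, as in earlier sections (an assumption):
kubectl get nodes
```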