Introduction
Get started with Helm on Google Axion C4A (Arm-based)
Create a Google Axion C4A virtual machine on Google Cloud
Install Helm
Validate Helm workflows on a Google Axion C4A virtual machine
Prepare a GKE cluster for Helm deployments
Deploy PostgreSQL using a custom Helm chart
Deploy Redis on GKE
Deploy NGINX with public access
Benchmark Helm concurrency on a Google Axion C4A virtual machine
Next Steps
In this section, you’ll benchmark Helm CLI concurrency on your Arm64-based GCP SUSE VM. Since Helm doesn’t provide built-in performance metrics, you’ll measure concurrency behavior by running multiple Helm commands in parallel and recording total execution time.
Ensure the local Kubernetes cluster is running and has sufficient resources to deploy multiple NGINX replicas.
Verify Helm and Kubernetes access:
helm version
kubectl get nodes
All nodes should be in the Ready state.
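If you want to script the readiness check, the sketch below reports any node that is not Ready. It assumes kubectl is configured for your cluster; the helper name check_nodes_ready is illustrative:

```bash
# Sketch: report any node that is not in the Ready state.
# Assumes kubectl is configured for the cluster; the helper name
# check_nodes_ready is illustrative, not part of kubectl.
check_nodes_ready() {
  kubectl get nodes --no-headers \
    | awk '$2 != "Ready" {bad=1; print $1 " is not Ready"} END {exit bad}' \
    && echo "All nodes Ready"
}
```

The function exits non-zero and names the offending nodes if any are not Ready, which makes it easy to gate the rest of the benchmark on cluster health.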
Configure Helm to download charts from the Bitnami repository:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
Create a dedicated namespace to isolate benchmark workloads from other cluster resources:
kubectl create namespace helm-bench
Prepare the cluster by pulling container images:
helm install warmup bitnami/nginx \
-n helm-bench \
--set service.type=ClusterIP \
--timeout 10m
The first install is usually slower because images must be downloaded and Kubernetes needs to initialize internal objects. This warm-up run reduces image-pull and initialization overhead so the benchmark focuses more on Helm CLI concurrency and Kubernetes API behavior.
You should see output similar to the following near the top:
NAME: warmup
LAST DEPLOYED: Tue Dec 9 21:10:44 2025
NAMESPACE: helm-bench
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: nginx
CHART VERSION: 22.3.3
APP VERSION: 1.29.3
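Before removing the warm-up release, you can confirm its pods reached the Running state, which means the NGINX image is cached on the node. A minimal sketch, assuming kubectl access; the helper name pods_running is illustrative:

```bash
# Sketch: check whether every pod in the benchmark namespace is Running.
# Assumes kubectl access; the helper name pods_running is illustrative.
pods_running() {
  kubectl get pods -n helm-bench --no-headers \
    | awk '$3 != "Running" {bad=1; print $1 " is " $3} END {exit bad}' \
    && echo "All pods Running"
}
```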
After validation, remove the warm-up deployment:
helm uninstall warmup -n helm-bench
Helm does not provide native concurrency or throughput metrics. Concurrency benchmarking is performed by executing multiple Helm CLI operations in parallel and measuring overall completion time.
Run multiple Helm installs in parallel:
time (
for i in {1..5}; do
helm install nginx-$i bitnami/nginx \
-n helm-bench \
--set service.type=ClusterIP \
--timeout 10m &
done
wait
)
This measures Helm concurrency handling, Kubernetes API responsiveness, and client-side execution on Arm64.
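The loop above can be generalized into a small helper so you can vary the number of parallel installs. This is a sketch; the function name bench_installs and the bench- release prefix are illustrative, not part of Helm:

```bash
# Sketch: time a configurable number of parallel Helm installs.
# The function name bench_installs and the bench- release prefix are
# illustrative; extra flags (for example --wait) pass through to helm.
bench_installs() {
  local count=$1; shift
  local start end i
  start=$(date +%s)
  for i in $(seq 1 "$count"); do
    helm install "bench-$i" bitnami/nginx \
      -n helm-bench \
      --set service.type=ClusterIP \
      --timeout 10m "$@" &
  done
  wait
  end=$(date +%s)
  echo "Installed $count releases in $((end - start))s"
}
```

For example, bench_installs 5 repeats the no-wait test, and bench_installs 3 --wait includes workload readiness time.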
You should see output similar to:
real 0m3.998s
user 0m12.798s
sys 0m0.339s
Confirm that all components were installed successfully:
helm list -n helm-bench
kubectl get pods -n helm-bench
All releases should be in deployed state and pods should be in Running status.
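To script this check, you can compare the total release count against the releases reporting deployed status; -q and --deployed are standard helm list flags. The helper name check_releases is illustrative:

```bash
# Sketch: compare the total release count in the namespace against
# the releases in "deployed" status; -q and --deployed are standard
# helm list flags. The helper name check_releases is illustrative.
check_releases() {
  local total deployed
  total=$(helm list -n helm-bench -q | wc -l)
  deployed=$(helm list -n helm-bench -q --deployed | wc -l)
  echo "$deployed of $total releases deployed"
}
```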
Run a benchmark that includes workload readiness time:
time (
for i in {1..3}; do
helm install nginx-wait-$i bitnami/nginx \
-n helm-bench \
--set service.type=ClusterIP \
--wait \
--timeout 15m &
done
wait
)
This measures Helm concurrency combined with scheduler and image-pull contention, showing the end-to-end impact of waiting for workload readiness.
The output is similar to:
real 0m12.924s
user 0m7.333s
sys 0m0.312s
Record your results. The table below shows the results from the runs above on a c4a-standard-4 (4 vCPU, 16 GB memory) Arm64 VM in GCP running SUSE:
| Test Case | Parallel Installs | --wait Used | Timeout | Total Time (real) |
|---|---|---|---|---|
| Parallel Install (No Wait) | 5 | No | 10m | 3.99 s |
| Parallel Install (With Wait) | 3 | Yes | 15m | 12.92 s |
Key observations:
- The --wait flag significantly increases total execution time because Helm waits for workloads to reach the Ready state, reflecting scheduler and image-pull delays rather than Helm CLI overhead.

You have successfully benchmarked Helm concurrency on a Google Axion C4A Arm64 VM:

- Parallel installs without --wait complete in a few seconds, measuring Helm CLI and Kubernetes API responsiveness.
- The --wait flag extends deployment time to reflect actual workload initialization.

These results establish a performance baseline for deploying containerized workloads with Helm on Arm64-based cloud infrastructure.
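When you have finished recording results, you can remove the benchmark releases and namespace so they do not consume cluster resources. A minimal cleanup sketch; the helper name cleanup_bench is illustrative:

```bash
# Cleanup sketch: uninstall every release in the benchmark namespace,
# then delete the namespace itself. The helper name cleanup_bench is
# illustrative, not part of Helm or kubectl.
cleanup_bench() {
  local release
  for release in $(helm list -n helm-bench -q); do
    helm uninstall "$release" -n helm-bench
  done
  kubectl delete namespace helm-bench
}
```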