Deployment and service

In this section, you’ll add a new namespace, deployment, and service for nginx on the Intel x86 architecture. The result is a Kubernetes cluster running nginx that is reachable from the internet through a load balancer.

The deployment configuration uses three separate files to organize the different components:

  • A file called namespace.yaml that creates a dedicated nginx namespace to contain all your Kubernetes nginx objects.

  • A file called nginx-configmap.yaml that contains a shared ConfigMap called nginx-config with performance-optimized nginx settings that both Intel and Arm deployments will use.

  • A file called intel_nginx.yaml that creates two main Kubernetes objects:

    • A deployment called nginx-intel-deployment that pulls the multi-architecture nginx image from DockerHub, runs it on the Intel node, and mounts the shared ConfigMap as /etc/nginx/nginx.conf to apply the optimized configuration.
    • A service called nginx-intel-svc that acts as a load balancer for pods labeled with both app: nginx-multiarch and arch: intel.
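
If you want a sense of what the first two files contain before downloading them, a minimal sketch looks like the following. This is illustrative only; the authoritative versions, including the actual nginx settings and the exact ConfigMap key names, are the files in the repository linked below:

apiVersion: v1
kind: Namespace
metadata:
  name: nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: nginx
data:
  nginx.conf: |
    # performance-tuned nginx settings live here in the real file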

Run the following commands to download and apply the namespace, ConfigMap, and Intel nginx deployment configuration:

curl -o namespace.yaml https://raw.githubusercontent.com/geremyCohen/nginxOnAKS/refs/heads/main/namespace.yaml
kubectl apply -f namespace.yaml

curl -o nginx-configmap.yaml https://raw.githubusercontent.com/geremyCohen/nginxOnAKS/refs/heads/main/nginx-configmap.yaml
kubectl apply -f nginx-configmap.yaml

curl -o intel_nginx.yaml https://raw.githubusercontent.com/geremyCohen/nginxOnAKS/refs/heads/main/intel_nginx.yaml
kubectl apply -f intel_nginx.yaml

You will see output similar to:

namespace/nginx created
configmap/nginx-config created
deployment.apps/nginx-intel-deployment created
service/nginx-intel-svc created

Examine the deployment configuration

Take a closer look at the intel_nginx.yaml deployment file. You’ll see some settings that ensure the deployment runs on the Intel x86 node.

Note

The amd64 architecture label represents x86_64 nodes, which can be either AMD or Intel processors. This Learning Path was tested on Intel x86 nodes.

The nodeSelector value is set to kubernetes.io/arch: amd64. This ensures that the deployment only runs on x86_64 nodes, utilizing the amd64 version of the nginx container image.

    spec:
      nodeSelector:
        kubernetes.io/arch: amd64
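
To confirm which architecture label each node carries, you can list the nodes with that label shown as a column; the Intel node reports amd64 and the Arm node reports arm64:

kubectl get nodes -L kubernetes.io/arch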

Setting sessionAffinity to None disables sticky sessions, so successive requests are not pinned to the same pod and can be distributed across all matching pods.

spec:
  sessionAffinity: None
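
For contrast, if you did want requests from the same client to keep landing on the same pod, a Service can opt in to sticky sessions. This is not used in this Learning Path, but a sketch looks like:

spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800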

The service selector uses both app: nginx-multiarch and arch: intel labels to target only Intel pods. This dual-label approach allows for both architecture-specific and multi-architecture service routing.

  selector:
    app: nginx-multiarch
    arch: intel
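
For example, a service that should balance traffic across both Intel and Arm pods would omit the arch label and select only on the shared label, along these lines:

  selector:
    app: nginx-multiarch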

Because the final goal is to run nginx on multiple architectures, the deployment uses the standard nginx image from DockerHub. This image supports multiple architectures, including amd64 (Intel) and arm64 (Arm).

      containers:
      - image: nginx:latest
        name: nginx

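If you have the Docker CLI available on your local machine, you can confirm that this tag is published as a multi-architecture manifest that includes both amd64 and arm64 entries:

docker manifest inspect nginx:latest | grep architecture
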
Note

You can set nginx as the default namespace for your current kubectl context to avoid typing -n nginx with every command:

kubectl config set-context --current --namespace=nginx
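
To undo this later, point the context back at the default namespace:

kubectl config set-context --current --namespace=default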

Verify the deployment is complete

It’s now time to verify everything is running as expected.

Confirm that the nodes, pods, and services are running:

kubectl get nodes,pods,svc -n nginx

Your output should be similar to the following, showing two nodes, one pod, and one service:

NAME                                 STATUS   ROLES    AGE   VERSION
node/aks-arm-56500727-vmss000000     Ready    <none>   50m   v1.32.7
node/aks-intel-31372303-vmss000000   Ready    <none>   55m   v1.32.7

NAME                                          READY   STATUS    RESTARTS   AGE
pod/nginx-intel-deployment-78bb8885fd-mw24f   1/1     Running   0          38s

NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
service/nginx-intel-svc   LoadBalancer   10.0.226.250   20.80.128.191   80:30080/TCP   39s
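
If the EXTERNAL-IP column still shows <pending>, give Azure a minute or two to finish provisioning the load balancer and run the command again. You can also print just the service’s external IP address:

kubectl get svc nginx-intel-svc -n nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'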

You can also verify that the ConfigMap was created:

kubectl get configmap -n nginx

The output is similar to:

NAME               DATA   AGE
nginx-config       1      51s

With the pods in a Ready state and the service showing a valid External IP, you’re now ready to test the nginx Intel service.

Test the Intel service

Run the following to make an HTTP request to the Intel nginx service:

./nginx_util.sh curl intel
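
If you prefer not to use the nginx_util.sh helper script, a roughly equivalent manual check is to look up the service’s external IP and send the request with curl directly (the helper also prints a Served by summary, which this plain request does not):

EXTERNAL_IP=$(kubectl get svc nginx-intel-svc -n nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl http://$EXTERNAL_IP/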

You can see the HTTP response along with details about which pod served the request:

Using service endpoint 20.3.71.69 for curl on intel service
Response:
{
  "message": "nginx response",
  "timestamp": "2025-10-24T16:49:29+00:00",
  "server": "nginx-intel-deployment-758584d5c6-2nhnx",
  "request_uri": "/"
}
Served by: nginx-intel-deployment-758584d5c6-2nhnx

If you see similar output, you’ve successfully configured your AKS cluster with an Intel node running an nginx deployment and service based on the multi-architecture nginx container image.
