Step-by-Step Guide: Running Kubernetes Applications in a VMware Environment

This documentation provides a comprehensive walkthrough for deploying and managing modern, containerized applications with Kubernetes on a VMware vSphere foundation. By leveraging familiar VMware tools and infrastructure, organizations can accelerate their adoption of Kubernetes while maintaining enterprise-grade stability, security, and performance. This guide covers architecture, deployment options, networking design, and practical examples using solutions like VMware Tanzu and vSphere.

1. Understanding the Architecture

Running Kubernetes on VMware combines the power of cloud-native orchestration with the robustness of enterprise virtualization. This hybrid approach allows you to leverage existing investments in hardware, skills, and operational processes.

VMware Environment: The Foundation

The core infrastructure is your vSphere platform, which provides the compute, storage, and networking resources for the Kubernetes nodes. Key components include:

  • ESXi Hosts: The hypervisors that run the virtual machines (VMs) for Kubernetes control plane and worker nodes.
  • vCenter Server: The centralized management plane for your ESXi hosts and VMs. It’s essential for deploying, managing, and monitoring the cluster’s underlying infrastructure.
  • vSphere Storage: Datastores (vSAN, VMFS, NFS) that provide persistent storage for VMs and, through the vSphere CSI driver, for Kubernetes applications.

Kubernetes Installation: A Spectrum of Choices

VMware offers a range of options for deploying Kubernetes, from deeply integrated, turn-key solutions to flexible, do-it-yourself methods.

  • VMware vSphere with Tanzu (VKS): This is the premier, integrated solution that embeds Kubernetes directly into vSphere. It transforms a vSphere cluster into a platform for running both VMs and containers side-by-side. It simplifies deployment and provides seamless access to vSphere resources.
  • VMware Tanzu Kubernetes Grid (TKG): A standalone, multi-cloud Kubernetes runtime that you can deploy on vSphere (and other clouds). TKG is ideal for organizations that need a consistent Kubernetes distribution across different environments.
  • Kubeadm on VMs: The generic, open-source approach. You create Linux VMs on vSphere and use standard Kubernetes tools like kubeadm to bootstrap a cluster. This offers maximum flexibility but requires more manual configuration and lifecycle management.

Networking: The Critical Connector

Proper network design is crucial for security and performance. VMware provides powerful constructs for Kubernetes networking:

  • VMware NSX: An advanced network virtualization and security platform. When integrated with Kubernetes, NSX provides a full networking and security stack, including pod networking, load balancing, and micro-segmentation for “zero-trust” security between microservices (an example policy follows this list).
  • vSphere Distributed Switch (vDS): Can be used to create isolated networks (VLANs) for different traffic types—such as management, pod, and service traffic—providing a solid and performant networking base.
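
At the Kubernetes layer, pod-level segmentation is usually expressed as a standard NetworkPolicy, which NSX-integrated CNIs (NCP or Antrea) then enforce in the underlying network. Below is a minimal sketch, assuming an application namespace "web" whose pods should only accept traffic from pods labelled app: frontend on port 80; every name in it is illustrative.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-web   # illustrative name
  namespace: web                # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: web                  # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend     # only these pods may connect
      ports:
        - protocol: TCP
          port: 80              # and only on this port

Applied with kubectl, this denies all other inbound traffic to the selected pods, mirroring the default-deny posture that NSX micro-segmentation enforces at the infrastructure level.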

2. Prerequisites

Before deploying a cluster, ensure your VMware environment is prepared and has sufficient resources.

  • Configured vSphere/vCenter: A healthy vSphere 7.0U2 or newer environment with available ESXi hosts in a cluster.
  • Sufficient Resources: Plan for your desired cluster size. A small test cluster (1 control plane node, 3 workers) typically needs at least 16 vCPUs, 64 GB of RAM, and 500 GB of storage. Production clusters will require significantly more.
  • Networking Infrastructure:
    • (For vDS) Pre-configured port groups and VLANs for management, workload, and external access.
    • (For NSX) NSX Manager deployed and configured with network segments and T0/T1 gateways.
    • A pool of available IP addresses for all required networks.
  • Tooling (Optional but Recommended): VMware Tanzu CLI, Rancher, or other management tools to simplify cluster lifecycle operations.

3. Cluster Deployment: Step by Step

Option 1: VMware Tanzu Kubernetes Grid (TKG) Standalone

TKG provides a streamlined CLI or UI experience for creating conformant Kubernetes clusters.

# Install prerequisites: Docker, the Tanzu CLI, and kubectl
# Start the UI-based installer for a guided management cluster deployment
tanzu management-cluster create --ui

# Alternatively, use a YAML configuration file for repeatable deployments
tanzu management-cluster create --file my-cluster-config.yaml

The wizard or YAML file allows you to specify the vCenter endpoint, the number of nodes, VM sizes (e.g., small, medium, large), and network settings.
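
For a sense of what such a configuration file contains, the sketch below shows a trimmed-down example. Variable names follow the TKG cluster configuration file format but differ between releases, so treat this as an illustration and check the documentation for your TKG version; all values are placeholders.

# my-cluster-config.yaml (illustrative values only)
CLUSTER_NAME: tkg-demo
CLUSTER_PLAN: dev                              # dev = single control plane node, prod = three
VSPHERE_SERVER: vcenter.local
VSPHERE_USERNAME: administrator@vsphere.local
VSPHERE_PASSWORD: <password>
VSPHERE_DATACENTER: /Datacenter
VSPHERE_DATASTORE: /Datacenter/datastore/vsanDatastore
VSPHERE_NETWORK: /Datacenter/network/k8s-workload
VSPHERE_RESOURCE_POOL: /Datacenter/host/Cluster/Resources
VSPHERE_FOLDER: /Datacenter/vm/tkg
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.10.10.100   # VIP for the cluster API
VSPHERE_SSH_AUTHORIZED_KEY: "ssh-rsa AAAA..."  # public key injected into the node VMs
CONTROLPLANE_SIZE: small                       # maps to predefined VM sizes
WORKER_SIZE: medium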

Option 2: vSphere with Tanzu (VKS)

This method is fully integrated into the vSphere Client.

  1. In the vSphere Client, navigate to Workload Management.
  2. Enable it on a vSphere cluster, which deploys a Supervisor Cluster.
  3. Configure control plane node sizes and worker node pools via VM Classes.
  4. Assign network segments for Pod and Service IP ranges.
  5. Once enabled, developers can provision their own “Tanzu Kubernetes Clusters” on demand, typically by applying a cluster manifest like the sketch after this list.
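
To illustrate that self-service step, a developer connected to the Supervisor Cluster can request a cluster by applying a TanzuKubernetesCluster manifest similar to the sketch below. The exact API version, VM classes, storage classes, and Kubernetes versions depend on your environment, so every value shown is a placeholder.

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: dev-cluster                 # illustrative cluster name
  namespace: dev-namespace          # vSphere Namespace created by the administrator
spec:
  distribution:
    version: v1.23                  # must match a release available in your environment
  topology:
    controlPlane:
      count: 1                      # control plane nodes
      class: best-effort-small      # VM Class defined in vSphere
      storageClass: vsan-default-storage-policy
    workers:
      count: 3                      # worker nodes
      class: best-effort-medium
      storageClass: vsan-default-storage-policy

Applying this manifest with kubectl against the Supervisor Cluster triggers provisioning of the node VMs and the new cluster.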

Option 3: Kubeadm on VMs (DIY)

This is the most manual but also most transparent method.

  1. Prepare Linux VMs on vSphere (e.g., Ubuntu 20.04). Best practice is to create a template.
  2. Install a container runtime (Containerd), kubeadm, kubelet, and kubectl on all VMs.
  3. Initialize the control plane node (a config-file alternative is sketched after this list):
# Replace with your chosen Pod network range
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
  4. Install a CNI (Container Network Interface) plugin such as Calico or Antrea.
  5. Join the worker nodes using the kubeadm join command printed in the kubeadm init output.
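
If you prefer a declarative, repeatable initialization over command-line flags, kubeadm also accepts a configuration file. The minimal sketch below is equivalent to the flag used in step 3 (the API version may differ depending on your kubeadm release):

# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16   # same Pod network range as --pod-network-cidr

It is then passed to the init command with sudo kubeadm init --config kubeadm-config.yaml.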

4. Networking Design Example

A segmented network topology is a best practice for security and manageability. NSX or vDS with VLANs enables this isolation.

Reference Network Topology

| Component              | Network / VLAN    | Example Address Range | Purpose                                                                          |
|------------------------|-------------------|-----------------------|----------------------------------------------------------------------------------|
| vSphere Management     | mgmt-vlan         | 10.0.0.0/24           | Access to vCenter, ESXi management, and NSX Manager. Highly secured.             |
| Kubernetes API         | k8s-control-plane | 10.10.10.0/24         | For `kubectl` access and external automation tools to reach the cluster API.     |
| Pod Network (Overlay)  | k8s-pods-vxlan    | 192.168.0.0/16        | Internal, private network for all Pod-to-Pod communication. Managed by the CNI.  |
| Service Network        | k8s-svc-vlan      | 10.20.20.0/24         | Virtual IP range for Kubernetes services. Traffic is not routable externally.    |
| External LB / Ingress  | ext-lb-vlan       | 10.30.30.0/24         | Public-facing network where application IPs are exposed via LoadBalancers.      |

[External Users]
      |
[Firewall / Router]
      |
[Load Balancer / Ingress VIPs (10.30.30.x)]
      |
[K8s Service Network (10.20.20.x) – Internal]
      |
[Pods: Overlay Network (192.168.x.x)]
      |
[Worker Node VMs: Management Network on vSphere]
      |
[vSphere Mgmt (vCenter, NSX, ESXi)]

5. Deploying an Application Example

Once the cluster is running, you can deploy applications using standard Kubernetes manifest files.

Sample Deployment YAML (nginx-deployment.yaml)

This manifest creates a Deployment that ensures three replicas of an Nginx web server are always running.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3 # Desired number of pods
  selector:
    matchLabels:
      app: nginx # Connects the Deployment to the pods
  template: # Pod template
    metadata:
      labels:
        app: nginx # Label applied to each pod
    spec:
      containers:
      - name: nginx
        image: nginx:latest # The container image to use
        ports:
        - containerPort: 80 # The port the application listens on

Apply the configuration to your cluster:

kubectl apply -f nginx-deployment.yaml

6. Exposing the Application via a Service

A Deployment runs your pods, but a Service exposes them to the network. For production, a LoadBalancer service is recommended.

Sample LoadBalancer Service (nginx-service.yaml)

When deployed in an integrated environment like Tanzu with NSX, this automatically provisions an external IP from your load balancer pool.

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer # Asks the cloud provider for a load balancer
  selector:
    app: nginx # Forwards traffic to pods with this label
  ports:
    - protocol: TCP
      port: 80 # The port the service will be exposed on
      targetPort: 80 # The port on the pod to send traffic to

Apply the service and find its external IP:

kubectl apply -f nginx-service.yaml
kubectl get service nginx-service
# The output will show an EXTERNAL-IP once provisioned

You can then access your application at http://<EXTERNAL-IP>.

7. Scaling, Monitoring, and Managing

  • Scaling: Easily adjust the number of replicas to handle changing loads.
kubectl scale deployment/nginx-deployment --replicas=5
  • Monitoring: Combine vSphere monitoring (for VM health) with in-cluster tools like Prometheus and Grafana (for application metrics). VMware vRealize Operations provides a holistic view from app to infrastructure.
  • Storage: Use the vSphere CSI driver to provide persistent storage. Developers request storage with a PersistentVolumeClaim (PVC), and vSphere automatically provisions a virtual disk on a datastore (vSAN, VMFS, etc.) to back it; a sample claim is sketched below.
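
A minimal PVC sketch, assuming a StorageClass named vsphere-csi-sc backed by the vSphere CSI driver already exists (the StorageClass name is an assumption; use whatever your administrator has defined):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc                      # illustrative claim name
spec:
  accessModes:
    - ReadWriteOnce                   # single-node read/write, typical for block volumes
  resources:
    requests:
      storage: 10Gi                   # capacity to provision on the datastore
  storageClassName: vsphere-csi-sc    # assumed StorageClass backed by the vSphere CSI driver

Once the claim is bound, it can be mounted into a pod as a volume, and the backing virtual disk follows the placement and storage policies defined in vSphere.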

Best Practices & Further Reading

  • Use Resource Pools: In vSphere, use Resource Pools to guarantee CPU and memory for Kubernetes nodes, isolating them from other VM workloads.
  • Embrace NSX Security: Use NSX micro-segmentation to create firewall rules that control traffic between pods, enforcing a zero-trust security model.
  • Automate Everything: Leverage Terraform, Ansible, or PowerCLI to automate the deployment and configuration of your vSphere infrastructure and Kubernetes clusters.
  • Follow Validated Designs: For production, consult VMware’s official reference architectures to ensure a supportable and scalable deployment.

Document Version 1.0 | A foundational framework for enterprise Kubernetes on VMware.

Retrieve all MAC addresses of NICs associated with ESXi hosts in a cluster

# Import VMware PowerCLI module
Import-Module VMware.PowerCLI

# Connect to vCenter
$vCenter = "vcenter.local"  # Replace with your vCenter server
$username = "administrator@vsphere.local"  # Replace with your vCenter username
$password = "yourpassword"  # Replace with your vCenter password

Connect-VIServer -Server $vCenter -User $username -Password $password

# Specify the cluster name
$clusterName = "ClusterName"  # Replace with the target cluster name

# Get all ESXi hosts in the specified cluster
$esxiHosts = Get-Cluster -Name $clusterName | Get-VMHost

# Loop through each ESXi host in the cluster
# ($host is a reserved automatic variable in PowerShell, so use $esxiHost instead)
foreach ($esxiHost in $esxiHosts) {
    Write-Host "Processing ESXi Host: $($esxiHost.Name)" -ForegroundColor Cyan

    # Get all physical NICs (VMNICs) on the ESXi host
    $vmnics = Get-VMHostNetworkAdapter -VMHost $esxiHost -Physical

    # Get all VMkernel adapters on the ESXi host
    $vmkernelAdapters = Get-VMHostNetworkAdapter -VMHost $esxiHost -VMKernel

    # Display VMNICs and their associated VMkernel adapters
    foreach ($vmnic in $vmnics) {
        $macAddress = $vmnic.Mac
        Write-Host "  VMNIC: $($vmnic.Name)" -ForegroundColor Green
        Write-Host "    MAC Address: $macAddress"

        # Find the standard vSwitches that use this VMNIC as an uplink, then the
        # VMkernel adapters whose port groups live on those switches
        # (distributed switches are not covered by this simple lookup)
        $switches = Get-VirtualSwitch -VMHost $esxiHost -Standard | Where-Object { $_.Nic -contains $vmnic.Name }
        $portGroupNames = $switches | Get-VirtualPortGroup | Select-Object -ExpandProperty Name
        $associatedVmkernels = $vmkernelAdapters | Where-Object { $portGroupNames -contains $_.PortGroupName }

        if ($associatedVmkernels) {
            foreach ($vmkernel in $associatedVmkernels) {
                Write-Host "    Associated VMkernel Adapter: $($vmkernel.Name)" -ForegroundColor Yellow
                Write-Host "      VMkernel IP: $($vmkernel.IP)"
            }
        } else {
            Write-Host "    No associated VMkernel adapters." -ForegroundColor Red
        }
    }

    Write-Host ""  # Blank line for readability
}

# Disconnect from vCenter
Disconnect-VIServer -Confirm:$false




Sample Output:

Processing ESXi Host: esxi01.local
  VMNIC: vmnic0
    MAC Address: 00:50:56:11:22:33
    Associated VMkernel Adapter: vmk0
      VMkernel IP: 192.168.1.10

  VMNIC: vmnic1
    MAC Address: 00:50:56:44:55:66
    No associated VMkernel adapters.

Processing ESXi Host: esxi02.local
  VMNIC: vmnic0
    MAC Address: 00:50:56:77:88:99
    Associated VMkernel Adapter: vmk1
      VMkernel IP: 192.168.1.20

Exporting to a CSV (Optional)

If you want to save the results to a CSV file, modify the script as follows:

  1. Create a results array at the top:
$results = @()

  2. Add results to the array inside the innermost foreach loop, where $vmkernel is in scope:

$results += [PSCustomObject]@{
    HostName        = $esxiHost.Name
    VMNIC           = $vmnic.Name
    MACAddress      = $macAddress
    VMkernelAdapter = $vmkernel.Name
    VMkernelIP      = $vmkernel.IP
}

  3. Export the results at the end of the script:

$results | Export-Csv -Path "C:\VMNIC_VMkernel_Report.csv" -NoTypeInformation