VMware Storage Performance Testing Suite

ESXi hosts provide fio, ioping, and esxtop for storage benchmarking directly from the CLI, while vCenter PowerCLI aggregates performance across clusters and datastores. This script suite generates IOPS, latency, and throughput charts viewable in Confluence/HTML dashboards.

Core Testing Engine (fio-perf-test.sh)

Purpose: Run standardized fio workloads (4K random, 64K seq) on VMFS/NFS datastores.

#!/bin/bash
# fio-perf-test.sh - Run on ESXi via SSH
# Pick the first mounted volume as the test target (adjust the pattern to pin a specific datastore)
DATASTORE="$(esxcli storage filesystem list | awk '/^\/vmfs\/volumes\// {print $1; exit}')"
TEST_DIR="$DATASTORE/perf-test"
FIO_TEST="/usr/lib/vmware/fio/fio"   # adjust if fio is installed elsewhere; it is not present on every ESXi build

mkdir -p "$TEST_DIR"
cd "$TEST_DIR" || exit 1

cat > fio-random-4k.fio << EOF
[global]
ioengine=libaio
direct=1
size=1G
time_based
runtime=60
group_reporting
directory=$TEST_DIR

[rand-read]
rw=randread
bs=4k
numjobs=4
iodepth=32
filename=testfile.dat

[rand-write]
# run after the rand-read job completes (fio otherwise runs all jobs in parallel)
stonewall
rw=randwrite
bs=4k
numjobs=4
iodepth=32
filename=testfile.dat
EOF

# Run tests
$FIO_TEST fio-random-4k.fio > /tmp/fio-4k-results.txt
$FIO_TEST --name=seq-read --rw=read --bs=64k --size=4G --time_based --runtime=60 --direct=1 --numjobs=1 --iodepth=32 --filename=$TEST_DIR/testfile.dat >> /tmp/fio-seq-results.txt

# Cleanup and append a one-line summary (host, date, read bandwidth, read IOPS) parsed from fio's text output
rm -rf "$TEST_DIR"/*
READ_SUMMARY=$(grep -m1 'read: IOPS=' /tmp/fio-4k-results.txt)
echo "$(hostname),$(date),$(echo "$READ_SUMMARY" | awk -F'BW=' '{print $2}' | awk '{print $1}'),$(echo "$READ_SUMMARY" | awk -F'IOPS=' '{print $2}' | awk -F',' '{print $1}')" >> /tmp/storage-perf.csv

Cron Schedule: `0 2 * * 1 /scripts/fio-perf-test.sh` (weekly baseline).

vCenter PowerCLI Aggregator (StoragePerf.ps1)

Purpose: Collect historical datastore performance and host latency counters across all clusters and hosts (live esxtop captures would additionally require SSH to each host).

# StoragePerf.ps1 - vCenter Storage Performance Dashboard
Connect-VIServer vcenter.example.com

$Report = @()
$Clusters = Get-Cluster

foreach ($Cluster in $Clusters) {
    # $Host is a reserved automatic variable in PowerShell, so use $ESXiHost for the loop
    $ESXiHosts = Get-VMHost -Location $Cluster
    foreach ($ESXiHost in $ESXiHosts) {
        # Live esxtop batch captures require SSH to the host (Invoke-VMScript only targets guest VMs).
        # As a PowerCLI-native stand-in, sample the host's highest device latency counter (ms).
        $HostLatencyMs = (Get-Stat -Entity $ESXiHost -Stat "disk.maxTotalLatency.latest" -Realtime -MaxSamples 12 | Measure-Object -Property Value -Average).Average

        # Historical datastore throughput (KBps), sampled per host
        # (aggregated across all datastore instances on the host; filter on $_.Instance to narrow to $DS)
        $Datastores = Get-Datastore -VMHost $ESXiHost
        foreach ($DS in $Datastores) {
            $Perf = Get-Stat -Entity $ESXiHost -Stat "datastore.read.average","datastore.write.average" -Realtime -MaxSamples 24 -ErrorAction SilentlyContinue

            $Report += [PSCustomObject]@{
                Host          = $ESXiHost.Name
                Datastore     = $DS.Name
                FreeGB        = [math]::Round($DS.FreeSpaceGB,1)
                ReadAvgKBps   = [math]::Round([double](($Perf | Where-Object { $_.MetricId -eq 'datastore.read.average' }  | Measure-Object -Property Value -Average).Average), 2)
                WriteAvgKBps  = [math]::Round([double](($Perf | Where-Object { $_.MetricId -eq 'datastore.write.average' } | Measure-Object -Property Value -Average).Average), 2)
                EsxtopLatency = [math]::Round([double]$HostLatencyMs, 2)   # host-level latency stand-in for an esxtop DAVG capture
            }
        }
    }
}

# Export CSV for charts
$Report | Export-Csv "StoragePerf-$(Get-Date -f yyyy-MM-dd).csv" -NoTypeInformation

# Generate HTML dashboard
$Report | ConvertTo-Html -Property Host,Datastore,FreeGB,ReadAvgKBps,WriteAvgKBps,EsxtopLatency -Title "Storage Performance" | 
    Out-File "storage-dashboard.html"

Performance Chart Generator (perf-charts.py)

Purpose: Converts CSV data to interactive Plotly charts for Confluence.

#!/usr/bin/env python3
# perf-charts.py - Generate HTML charts from CSV
import pandas as pd
import plotly.express as px
from plotly.subplots import make_subplots
import sys

df = pd.read_csv(sys.argv[1])

# Throughput vs latency scatter
fig1 = px.scatter(df, x='ReadAvgKBps', y='EsxtopLatency',
                  size='FreeGB', color='Host', hover_name='Datastore',
                  title='Storage Read Throughput vs Latency',
                  labels={'ReadAvgKBps': 'Read KBps', 'EsxtopLatency': 'Avg Latency (ms)'})

# Throughput bar chart
fig2 = px.bar(df, x='Datastore', y=['ReadAvgKBps','WriteAvgKBps'], 
              barmode='group', title='Read/Write Throughput by Datastore')

# Combined dashboard: copy every trace from the individual figures
fig = make_subplots(rows=2, cols=1, subplot_titles=('Read Throughput vs Latency', 'Read/Write Throughput'))
for trace in fig1.data:
    fig.add_trace(trace, row=1, col=1)
for trace in fig2.data:
    fig.add_trace(trace, row=2, col=1)

fig.write_html('storage-perf-dashboard.html')
print("Charts saved: storage-perf-dashboard.html")

Usage: `python3 perf-charts.py StoragePerf-2025-12-28.csv`

Master Orchestrator (storage-benchmark.py)

Purpose: Runs fio tests on all ESXi hosts + generates dashboard.

#!/usr/bin/env python3
# storage-benchmark.py - Master orchestrator
import paramiko
import subprocess
from datetime import datetime

ESXI_HOSTS = ['esxi1.example.com', 'esxi2.example.com']
VCENTER = 'vcenter.example.com'

def run_fio(host):
    """SSH to an ESXi host, download the fio script, and run it."""
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username='root', password='your-esxi-password')  # prefer key-based auth in production

    # Copy & run fio script
    stdin, stdout, stderr = ssh.exec_command(
        'wget -O /tmp/fio-test.sh https://your-confluence/scripts/fio-perf-test.sh '
        '&& chmod +x /tmp/fio-test.sh && /tmp/fio-test.sh')
    result = stdout.read().decode()
    ssh.close()
    return result

# Execute tests
perf_data = []
for host in ESXI_HOSTS:
    print(f"Testing {host}...")
    run_fio(host)
    perf_data.append({'Host': host, 'TestTime': datetime.now()})

# Pull PowerCLI report
subprocess.run(['pwsh', '-File', 'StoragePerf.ps1'])

# Generate charts
subprocess.run(['python3', 'perf-charts.py', f'StoragePerf-{datetime.now().strftime("%Y-%m-%d")}.csv'])

print("Storage benchmark complete. View storage-perf-dashboard.html")

Confluence Chart Embedding

HTML Macro (paste the contents of storage-perf-dashboard.html between the macro tags):

{html}
<!-- paste storage-perf-dashboard.html content here -->
{html}

CSV Table with Inline Charts:

||Host||Datastore||Read IOPS||Latency||Chart||
|esxi1|datastore1|2450|2.3ms|![Read IOPS|width=150px,height=100px](storage-esxi1.png)|

Automated Dashboard Cronjob

# /etc/cron.d/storage-perf (a cron.d file, not a shell script, so no shebang; the sixth field is the user)
# Daily 3AM: Test + upload to Confluence
0 3 * * * root /usr/local/bin/storage-benchmark.py >> /var/log/storage-perf.log 2>&1

Output Files:

  • /tmp/storage-perf.csv → Historical trends
  • storage-perf-dashboard.html → Interactive Plotly charts
  • /var/log/storage-perf.log → Audit trail

Sample Output Charts

Expected Results (Tintri VMstore baseline):

Datastore: tintri-vmfs-01
4K Random Read: 12,500 IOPS @ 1.8ms
4K Random Write: 8,200 IOPS @ 2.4ms
64K Seq Read: 450 MB/s
64K Seq Write: 380 MB/s

Pro Tips & Alerts

☐ Alert if Latency > 5ms: Add to PowerCLI `if($EsxtopLatency -gt 5) {Send-MailMessage}` (see the sketch below)
☐ Tintri-specific: Add `esxtop` filter for Tintri LUN paths
☐ NFS tuning: Test with the `NFS.MaxQueueDepth` advanced setting
☐ Compare baselines: Git commit CSV files weekly
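
A minimal sketch of the latency alert from the first checklist item, to drop at the end of StoragePerf.ps1 (the SMTP server and addresses are placeholders):

$Slow = $Report | Where-Object { [double]$_.EsxtopLatency -gt 5 }
if ($Slow) {
    Send-MailMessage -SmtpServer 'smtp.example.com' -From 'storage-perf@example.com' -To 'vmware-team@example.com' `
        -Subject 'Storage latency alert' -Body ($Slow | Format-Table Host,Datastore,EsxtopLatency -AutoSize | Out-String)
}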

Run First: `python3 storage-benchmark.py --dry-run` to validate hosts/configs.

Step-by-Step Guide: Running Kubernetes Applications in a VMware Environment

This documentation provides a comprehensive walkthrough for deploying and managing modern, containerized applications with Kubernetes on a VMware vSphere foundation. By leveraging familiar VMware tools and infrastructure, organizations can accelerate their adoption of Kubernetes while maintaining enterprise-grade stability, security, and performance. This guide covers architecture, deployment options, networking design, and practical examples using solutions like VMware Tanzu and vSphere.

1. Understanding the Architecture

Running Kubernetes on VMware combines the power of cloud-native orchestration with the robustness of enterprise virtualization. This hybrid approach allows you to leverage existing investments in hardware, skills, and operational processes.

VMware Environment: The Foundation

The core infrastructure is your vSphere platform, which provides the compute, storage, and networking resources for the Kubernetes nodes. Key components include:

  • ESXi Hosts: The hypervisors that run the virtual machines (VMs) for Kubernetes control plane and worker nodes.
  • vCenter Server: The centralized management plane for your ESXi hosts and VMs. It’s essential for deploying, managing, and monitoring the cluster’s underlying infrastructure.
  • vSphere Storage: Datastores (vSAN, VMFS, NFS) that provide persistent storage for VMs and, through the vSphere CSI driver, for Kubernetes applications.

Kubernetes Installation: A Spectrum of Choices

VMware offers a range of options for deploying Kubernetes, from deeply integrated, turn-key solutions to flexible, do-it-yourself methods.

  • VMware vSphere with Tanzu (VKS): This is the premier, integrated solution that embeds Kubernetes directly into vSphere. It transforms a vSphere cluster into a platform for running both VMs and containers side-by-side. It simplifies deployment and provides seamless access to vSphere resources.
  • VMware Tanzu Kubernetes Grid (TKG): A standalone, multi-cloud Kubernetes runtime that you can deploy on vSphere (and other clouds). TKG is ideal for organizations that need a consistent Kubernetes distribution across different environments.
  • Kubeadm on VMs: The generic, open-source approach. You create Linux VMs on vSphere and use standard Kubernetes tools like kubeadm to bootstrap a cluster. This offers maximum flexibility but requires more manual configuration and lifecycle management.

Networking: The Critical Connector

Proper network design is crucial for security and performance. VMware provides powerful constructs for Kubernetes networking:

  • VMware NSX: An advanced network virtualization and security platform. When integrated with Kubernetes, NSX provides a full networking and security stack, including pod networking, load balancing, and micro-segmentation for “zero-trust” security between microservices.
  • vSphere Distributed Switch (vDS): Can be used to create isolated networks (VLANs) for different traffic types—such as management, pod, and service traffic—providing a solid and performant networking base.
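
As a small illustration of the vDS approach (the switch name, port group names, and VLAN IDs below are placeholders), the isolated networks can be created with PowerCLI:

# Create VLAN-backed port groups for Kubernetes traffic on an existing vSphere Distributed Switch
$vds = Get-VDSwitch -Name "dvs-k8s"
New-VDPortgroup -VDSwitch $vds -Name "k8s-mgmt"     -VlanId 10
New-VDPortgroup -VDSwitch $vds -Name "k8s-workload" -VlanId 20
New-VDPortgroup -VDSwitch $vds -Name "k8s-frontend" -VlanId 30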

2. Prerequisites

Before deploying a cluster, ensure your VMware environment is prepared and has sufficient resources.

  • Configured vSphere/vCenter: A healthy vSphere 7.0U2 or newer environment with available ESXi hosts in a cluster.
  • Sufficient Resources: Plan for your desired cluster size. A small test cluster (1 control plane, 3 workers) may require at least 16 vCPUs, 64GB RAM, and 500GB of storage. Production clusters will require significantly more.
  • Networking Infrastructure:
    • (For vDS) Pre-configured port groups and VLANs for management, workload, and external access.
    • (For NSX) NSX Manager deployed and configured with network segments and T0/T1 gateways.
    • A pool of available IP addresses for all required networks.
  • Tooling (Optional but Recommended): VMware Tanzu CLI, Rancher, or other management tools to simplify cluster lifecycle operations.

3. Cluster Deployment: Step by Step

Option 1: VMware Tanzu Kubernetes Grid (TKG) Standalone

TKG provides a streamlined CLI or UI experience for creating conformant Kubernetes clusters.

# Install prerequisites: Docker, Tanzu CLI, kubectl
# Start the UI-based installer for a guided experience
tanzu standalone-cluster create --ui

# Alternatively, use a YAML configuration file for repeatable deployments
tanzu standalone-cluster create -f my-cluster-config.yaml

The wizard or YAML file allows you to specify the vCenter endpoint, the number of nodes, VM sizes (e.g., small, medium, large), and network settings.

Option 2: vSphere with Tanzu (VKS)

This method is fully integrated into the vSphere Client.

  1. In the vSphere Client, navigate to Workload Management.
  2. Enable it on a vSphere cluster, which deploys a Supervisor Cluster.
  3. Configure control plane node sizes and worker node pools via VM Classes.
  4. Assign network segments for Pod and Service IP ranges.
  5. Once enabled, developers can provision their own “Tanzu Kubernetes Clusters” on-demand.
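
Once the Supervisor Cluster is up, developers typically authenticate with the kubectl vSphere plugin; a minimal sketch (the server address, namespace, and cluster name are placeholders):

# Log in to the Supervisor and target a provisioned Tanzu Kubernetes Cluster
kubectl vsphere login --server=<supervisor-ip> --vsphere-username devops@vsphere.local \
  --tanzu-kubernetes-cluster-namespace demo-ns \
  --tanzu-kubernetes-cluster-name demo-tkc
kubectl config use-context demo-tkc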

Option 3: Kubeadm on VMs (DIY)

This is the most manual but also most transparent method.

  1. Prepare Linux VMs on vSphere (e.g., Ubuntu 20.04). Best practice is to create a template.
  2. Install a container runtime (Containerd), kubeadm, kubelet, and kubectl on all VMs.
  3. Initialize the master node:
# Replace with your chosen Pod network range
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
  4. Install a CNI (Container Network Interface) plugin like Calico or Antrea (see the example below).
  5. Join worker nodes using the command provided by the kubeadm init output.
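
A hedged sketch of steps 4-5 (pin the Antrea manifest URL to a real release version, and use the token and hash printed by your own kubeadm init run):

# Install the CNI plugin (Antrea shown; Calico works similarly)
kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<version>/antrea.yml

# Join each worker node with the values from the kubeadm init output
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>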

4. Networking Design Example

A segmented network topology is a best practice for security and manageability. NSX or vDS with VLANs enables this isolation.

Reference Network Topology

| Component | Network / VLAN | Example Address Range | Purpose |
|---|---|---|---|
| vSphere Management | mgmt-vlan | 10.0.0.0/24 | Access to vCenter, ESXi management, and NSX Manager. Highly secured. |
| Kubernetes API | k8s-control-plane | 10.10.10.0/24 | For `kubectl` access and external automation tools to reach the cluster API. |
| Pod Network (Overlay) | k8s-pods-vxlan | 192.168.0.0/16 | Internal, private network for all Pod-to-Pod communication. Managed by the CNI. |
| Service Network | k8s-svc-vlan | 10.20.20.0/24 | Virtual IP range for Kubernetes services. Traffic is not routable externally. |
| External LB / Ingress | ext-lb-vlan | 10.30.30.0/24 | Public-facing network where application IPs are exposed via LoadBalancers. |

[External Users]
      |
[Firewall / Router]
      |
[Load Balancer/Ingress VIPs (10.30.30.x)]
      |
[K8s Service Network (10.20.20.x) – Internal]
      |
[Pods: Overlay Network (192.168.x.x)]
      |
[Worker Node VMs: Management Network on vSphere]
      |
[vSphere Mgmt (vCenter, NSX, ESXi)]

5. Deploying an Application Example

Once the cluster is running, you can deploy applications using standard Kubernetes manifest files.

Sample Deployment YAML (nginx-deployment.yaml)

This manifest creates a Deployment that ensures three replicas of an Nginx web server are always running.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3 # Desired number of pods
  selector:
    matchLabels:
      app: nginx # Connects the Deployment to the pods
  template: # Pod template
    metadata:
      labels:
        app: nginx # Label applied to each pod
    spec:
      containers:
      - name: nginx
        image: nginx:latest # The container image to use
        ports:
        - containerPort: 80 # The port the application listens on

Apply the configuration to your cluster:

kubectl apply -f nginx-deployment.yaml

6. Exposing the Application via a Service

A Deployment runs your pods, but a Service exposes them to the network. For production, a LoadBalancer service is recommended.

Sample LoadBalancer Service (nginx-service.yaml)

When deployed in an integrated environment like Tanzu with NSX, this automatically provisions an external IP from your load balancer pool.

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer # Asks the cloud provider for a load balancer
  selector:
    app: nginx # Forwards traffic to pods with this label
  ports:
    - protocol: TCP
      port: 80 # The port the service will be exposed on
      targetPort: 80 # The port on the pod to send traffic to

Apply the service and find its external IP:

kubectl apply -f nginx-service.yaml
kubectl get service nginx-service
# The output will show an EXTERNAL-IP once provisioned

You can then access your application at http://<EXTERNAL-IP>.

7. Scaling, Monitoring, and Managing

  • Scaling: Easily adjust the number of replicas to handle changing loads.
kubectl scale deployment/nginx-deployment --replicas=5
  • Monitoring: Combine vSphere monitoring (for VM health) with in-cluster tools like Prometheus and Grafana (for application metrics). VMware vRealize Operations provides a holistic view from app to infrastructure.
  • Storage: Use the vSphere CSI driver to provide persistent storage. Developers request storage with a PersistentVolumeClaim (PVC), and vSphere automatically provisions a virtual disk on a datastore (vSAN, VMFS, etc.) to back it.
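
To make the storage point concrete, here is a minimal sketch of a StorageClass backed by the vSphere CSI driver and a claim against it; the class name and size are illustrative, and production classes usually reference a vSphere storage policy:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-csi-sc            # example name
provisioner: csi.vsphere.vmware.com
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: vsphere-csi-sc
  resources:
    requests:
      storage: 20Gi               # vSphere provisions a virtual disk of this size to back the claim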

Best Practices & Further Reading

  • Use Resource Pools: In vSphere, use Resource Pools to guarantee CPU and memory for Kubernetes nodes, isolating them from other VM workloads.
  • Embrace NSX Security: Use NSX micro-segmentation to create firewall rules that control traffic between pods, enforcing a zero-trust security model.
  • Automate Everything: Leverage Terraform, Ansible, or PowerCLI to automate the deployment and configuration of your vSphere infrastructure and Kubernetes clusters.
  • Follow Validated Designs: For production, consult VMware’s official reference architectures to ensure a supportable and scalable deployment.

Document Version 1.0 | A foundational framework for enterprise Kubernetes on VMware.

Proxmox and VMware in NFS Environments & Performance Testing

Network File System (NFS) is a distributed file system protocol allowing a user on a client computer to access files over a computer network much like local storage is accessed. Both Proxmox VE and VMware vSphere, leading virtualization platforms, can leverage NFS for flexible and scalable storage solutions. This document outlines key features and use cases for Proxmox and VMware in NFS environments, and details how to approach NFS performance testing.

Proxmox VE with NFS

Proxmox Virtual Environment (VE) is an open-source server virtualization management platform. It integrates KVM hypervisor and LXC containers, software-defined storage, and networking functionality on a single platform.

Key Features of Proxmox VE

  • Open-source: No licensing fees, extensive community support.
  • Integrated KVM and LXC: Supports both full virtualization (virtual machines) and lightweight containerization.
  • Web-based management interface: Provides a centralized control panel for all management tasks.
  • Clustering and High Availability (HA): Allows for the creation of resilient infrastructure by grouping multiple Proxmox VE servers.
  • Live migration: Enables moving running virtual machines between physical hosts in a cluster without downtime.
  • Built-in backup and restore tools: Offers integrated solutions for data protection.
  • Support for various storage types: Including NFS, iSCSI, Ceph, ZFS, LVM, and local directories.

Use Cases for Proxmox VE

  • Small to medium-sized businesses (SMBs) seeking a cost-effective and powerful virtualization solution.
  • Home labs and development/testing environments due to its flexibility and lack of licensing costs.
  • Hosting a variety of workloads such as web servers, databases, application servers, and network services.
  • Implementing private clouds and virtualized infrastructure.

Configuring NFS with Proxmox VE

Proxmox VE can easily integrate with NFS shares for storing VM disk images, ISO files, container templates, and backups.

  1. To add NFS storage in Proxmox VE, navigate to the “Datacenter” section in the web UI, then select “Storage”.
  2. Click the “Add” button and choose “NFS” from the dropdown menu.
  3. In the dialog box, provide the following:
    • ID: A unique name for this storage in Proxmox.
    • Server: The IP address or hostname of your NFS server.
    • Export: The exported directory path from the NFS server (e.g., /exports/data).
    • Content: Select the types of data you want to store on this NFS share (e.g., Disk image, ISO image, Container template, Backups).
  4. Adjust advanced options like NFS version if necessary, then click “Add”.
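
The same storage can also be added from the Proxmox VE CLI with pvesm; a sketch with example values (adjust the ID, server, export path, content types, and NFS version):

pvesm add nfs nfs-store01 \
    --server 192.168.1.50 \
    --export /exports/data \
    --content images,iso,backup \
    --options vers=4.1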

VMware vSphere with NFS

VMware vSphere is a comprehensive suite of virtualization products, with ESXi as the hypervisor and vCenter Server for centralized management. It is a widely adopted, enterprise-grade virtualization platform known for its robustness and extensive feature set.

Key Features of VMware vSphere

  • Robust and mature hypervisor (ESXi): Provides a stable and high-performance virtualization layer.
  • Advanced features: Includes vMotion (live migration of VMs), Storage vMotion (live migration of VM storage), Distributed Resource Scheduler (DRS) for load balancing, High Availability (HA) for automatic VM restart, and Fault Tolerance (FT) for continuous availability.
  • Comprehensive management with vCenter Server: A centralized platform for managing all aspects of the vSphere environment.
  • Strong ecosystem and third-party integrations: Wide support from hardware vendors and software developers.
  • Wide range of supported guest operating systems and hardware.
  • Advanced networking (vSphere Distributed Switch, NSX) and security features.

Use Cases for VMware vSphere

  • Enterprise data centers and hosting mission-critical applications requiring high availability and performance.
  • Large-scale virtualization deployments managing hundreds or thousands of VMs.
  • Virtual Desktop Infrastructure (VDI) deployments.
  • Implementing robust disaster recovery and business continuity solutions.
  • Building private, public, and hybrid cloud computing environments.

Configuring NFS with VMware vSphere

vSphere supports NFS version 3 and 4.1 for creating datastores. NFS datastores can be used to store virtual machine files (VMDKs), templates, and ISO images.

  1. Ensure your ESXi hosts have a VMkernel port configured for NFS traffic (typically on the management network or a dedicated storage network).
  2. Using the vSphere Client connected to vCenter Server (or directly to an ESXi host):
    1. Navigate to the host or cluster where you want to add the datastore.
    2. Go to the “Configure” tab, then select “Datastores” under Storage, and click “New Datastore”.
    3. In the New Datastore wizard, select “NFS” as the type of datastore.
    4. Choose the NFS version (NFS 3 or NFS 4.1). NFS 4.1 offers enhancements like Kerberos security.
    5. Enter a name for the datastore.
    6. Provide the NFS server’s IP address or hostname and the folder/share path (e.g., /vol/datastore1).
    7. Choose whether to mount the NFS share as read-only or read/write (default).
    8. Review the settings and click “Finish”.
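
The mount can also be scripted per host with esxcli, which is handy for presenting the same datastore to every host in a cluster (values below are examples):

# NFS v3 datastore
esxcli storage nfs add --host 192.168.1.50 --share /vol/datastore1 --volume-name nfs-ds01

# NFS v4.1 datastore (the nfs41 namespace accepts multiple server addresses for multipathing)
esxcli storage nfs41 add --hosts 192.168.1.50 --share /vol/datastore1 --volume-name nfs41-ds01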

NFS Performance Testing

Testing the performance of your NFS storage is crucial to ensure it meets the demands of your virtualized workloads and to identify potential bottlenecks before they impact production.

Why test NFS performance?

  • To validate that the NFS storage solution can deliver the required IOPS (Input/Output Operations Per Second) and throughput for your virtual machines.
  • To identify bottlenecks in the storage infrastructure, network configuration (switches, NICs, cabling), or NFS server settings.
  • To establish a performance baseline before making changes (e.g., software upgrades, hardware changes, network modifications) and to verify improvements after changes.
  • To ensure a satisfactory user experience for applications running on VMs that rely on NFS storage.
  • For capacity planning and to understand storage limitations.

Common tools for NFS performance testing

  • fio (Flexible I/O Tester): A powerful and versatile open-source I/O benchmarking tool that can simulate various workload types (sequential, random, different block sizes, read/write mixes). Highly recommended.
  • iozone: Another popular filesystem benchmark tool that can test various aspects of file system performance.
  • dd: A basic Unix utility that can be used for simple sequential read/write tests, but it’s less comprehensive for detailed performance analysis.
  • VM-level tools: Guest OS specific tools (e.g., CrystalDiskMark on Windows, or `fio` within a Linux VM) can also be used from within a virtual machine accessing the NFS datastore to measure performance from the application’s perspective.

What the test does (explaining a generic NFS performance test)

A typical NFS performance test involves a client (e.g., a Proxmox host, an ESXi host, or a VM running on one of these platforms) generating I/O operations (reads and writes) of various sizes and patterns (sequential, random) to files located on the NFS share. The primary goal is to measure:

  • Throughput: The rate at which data can be transferred, usually measured in MB/s or GB/s. This is important for large file transfers or streaming workloads.
  • IOPS (Input/Output Operations Per Second): The number of read or write operations that can be performed per second. This is critical for transactional workloads like databases or applications with many small I/O requests.
  • Latency: The time taken for an I/O operation to complete, usually measured in milliseconds (ms) or microseconds (µs). Low latency is crucial for responsive applications.

The test simulates different workload profiles (e.g., mimicking a database server, web server, or file server) to understand how the NFS storage performs under conditions relevant to its intended use.

Key metrics to observe

  • Read/Write IOPS for various block sizes (e.g., 4KB, 8KB, 64KB, 1MB).
  • Read/Write throughput (bandwidth) for sequential and random operations.
  • Average, 95th percentile, and maximum latency for I/O operations.
  • CPU utilization on both the NFS client (hypervisor or VM) and the NFS server during the test.
  • Network utilization and potential congestion points (e.g., packet loss, retransmits).

Steps to run a (generic) NFS performance test

  1. Define Objectives and Scope: Clearly determine what you want to measure (e.g., maximum sequential throughput, random 4K IOPS, latency under specific load). Identify the specific NFS share and client(s) for testing.
  2. Prepare the Test Environment:
    • Ensure the NFS share is correctly mounted on the test client(s).
    • Minimize other activities on the NFS server, client, and network during the test to get clean results.
    • Verify network connectivity and configuration (e.g., jumbo frames if used, correct VLANs).
  3. Choose and Install a Benchmarking Tool: For example, install `fio` on the Linux-based hypervisor (Proxmox VE) or a Linux VM.
  4. Configure Test Parameters in the Tool:
    • Test file size: Should be significantly larger than the NFS server’s cache and the client’s RAM to avoid misleading results due to caching (e.g., 2-3 times the RAM of the NFS server).
    • Block size (bs): Vary this to match expected workloads (e.g., bs=4k for database-like random I/O, bs=1M for sequential streaming).
    • Read/Write mix (rw): Examples: read (100% read), write (100% write), randread, randwrite, rw (50/50 read/write), randrw (50/50 random read/write), or specific mixes like rwmixread=70 (70% read, 30% write).
    • Workload type: Sequential (rw=read or rw=write) or random (rw=randread or rw=randwrite).
    • Number of threads/jobs (numjobs): To simulate concurrent access from multiple applications or VMs.
    • I/O depth (iodepth): Number of outstanding I/O operations, simulating queue depth.
    • Duration of the test (runtime): Run long enough to reach a steady state (e.g., 5-15 minutes per test case).
    • Target directory: Point to a directory on the mounted NFS share.
  5. Execute the Test: Run the benchmark tool from the client machine, targeting a file or directory on the NFS share.

Example fio command (conceptual, for a 70/30 random read/write test):

fio --name=nfs_randrw_test \
--directory=/mnt/nfs_share_mountpoint \
--ioengine=libaio \
--direct=1 \
--rw=randrw \
--rwmixread=70 \
--bs=4k \
--size=20G \
--numjobs=8 \
--iodepth=32 \
--runtime=300 \
--group_reporting \
--output=nfs_test_results.txt

(Note: /mnt/nfs_share_mountpoint should be replaced with the actual mount point of your NFS share. Parameters like size, numjobs, and iodepth should be adjusted based on specific needs, available resources, and the NFS server’s capabilities. direct=1 attempts to bypass client-side caching.)

  6. Collect and Analyze Results: Gather the output from the tool (IOPS, throughput, latency figures). Also, monitor CPU, memory, and network utilization on both the client and the NFS server during the test using tools like top, htop, vmstat, iostat, nfsstat, sar, or platform-specific monitoring tools (Proxmox VE dashboard, ESXTOP).
  7. Document and Iterate: Record the test configuration and results. If performance is not as expected, investigate potential bottlenecks (NFS server tuning, network, client settings), make adjustments, and re-test to measure the impact of changes. Repeat with different test parameters to cover various workload profiles.

Conclusion

Both Proxmox VE and VMware vSphere offer robust support for NFS, providing flexible and scalable storage solutions for virtual environments. Understanding their respective key features, use cases, and configuration methods helps in architecting efficient virtualized infrastructures. Regardless of the chosen virtualization platform, performing diligent and methodical NFS performance testing is essential. It allows you to validate your storage design, ensure optimal operation, proactively identify and resolve bottlenecks, and ultimately guarantee that your storage infrastructure can effectively support the demands of your virtualized workloads and applications.



Deploying a Hyper-V environment within VMware

This provides a nested environment for educational purposes: you create a virtual machine inside VMware that runs Hyper-V as its own hypervisor.

Here’s how to deploy Hyper-V within a VMware environment, along with a detailed network diagram and workflow:

Steps to Deploy Hyper-V in VMware

  1. Prepare VMware Environment:
    • Ensure your VMware platform (such as VMware vSphere) is fully set up and operational.
    • Verify BIOS settings on the physical host to ensure virtualization extensions (VT-x/AMD-V) are enabled.
  2. Create a New Virtual Machine in VMware:
    • Open vSphere Client or VMware Workstation (depending on your setup).
    • Create a new virtual machine with the appropriate guest operating system (usually Windows Server for Hyper-V).
    • Allocate sufficient resources (CPU, Memory) for the Hyper-V role.
    • Enable Nested Virtualization:
      • In VMware Workstation or vSphere, access additional CPU settings.
      • Check “Expose hardware assisted virtualization to the guest OS” for VMs running Hyper-V.
  3. Install Windows Server on the VM:
    • Deploy or install Windows Server within the newly created VM.
    • Complete initial configuration options, such as OS and network settings.
  4. Add Hyper-V Role:
    • Go to Server Manager in Windows Server.
    • Navigate to Add Roles and Features and select Hyper-V.
    • Follow the wizard to complete Hyper-V setup.
  5. Configure Virtual Networking for Hyper-V:
    • Open Hyper-V Manager to create and configure virtual switches connected to VMware’s virtual network interfaces.
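
A hedged PowerShell equivalent of steps 4-5, run inside the Windows Server VM (the adapter name is an example; check Get-NetAdapter first):

# Add the Hyper-V role and management tools, then reboot
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# After the reboot, create an external virtual switch bound to the VM's NIC,
# which in turn connects to a VMware port group
New-VMSwitch -Name "HV-External" -NetAdapterName "Ethernet0" -AllowManagementOS $true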

Network Diagram

+-------------------------------------------------------------------------------------+
|                           VMware Platform (vSphere/Workstation)                     |
| +-------------------------------------+    +-------------------------------------+  |
| | Virtual Machine (VM) with Hyper-V   |    | Virtual Machine (VM) with Hyper-V   |  |
| | Guest OS: Windows Server 2016/2019  |    | Guest OS: Windows Server 2016/2019  |  |
| | +---------------------------------+ |    | +---------------------------------+ |  |
| | | Hyper-V Role Enabled            |------->| Hyper-V Role Enabled            | |  |
| | |                                 | |    | |                                 | |  |
| | | +-----------------------------+ | |    | | +-----------------------------+ | |  |
| | | | Hyper-V VM Guest OS 1      | | |    | | | Hyper-V VM Guest OS 2      | | | |  |
| | | +-----------------------------+ | |    | | +-----------------------------+ | |  |
| | +---------------------------------+ |    | +---------------------------------+ |  |
| +-------------------------------------+    +-------------------------------------+  |
|      |                                                                          |  |
|      +--------------------------------------------------------------------------+  |
|                                     vSwitch/Network                                |
+-------------------------------------------------------------------------------------+

Workflow

  1. VMware Layer:
    • Create Host Environment: Deploy and configure your VMware environment.
    • Nested VM Support: Ensure nested virtualization is supported and enabled on the host machine for VM creation and Hyper-V operation.
  2. VM Deployment:
    • Instantiate VMs for Hyper-V: Allocate enough resources for VMs that will act as your Hyper-V servers.
  3. Install Hyper-V Role:
    • Enable Hyper-V: Use Windows Server’s Add Roles feature to set up Hyper-V capabilities.
    • Hypervisor Management: Use Hyper-V Manager to create and manage new VMs within this environment.
  4. Networking:
    • Configure Virtual Networks: Set up virtual switches in Hyper-V that map to VMware’s virtual network infrastructure.
    • Network Bridging/VLANs: Potentially implement VLANs or bridge networks to handle separated traffic and conduct more intricate networking tasks.
  5. Management and Monitoring:
    • Integrate Hyper-V and VMware management tools.
    • Use VMware tools to track resource usage and performance metrics, alongside Hyper-V Manager for specific VM operations.

Considerations

  • Performance: Running Hyper-V nested on VMware introduces additional resource overhead. Ensure adequate hardware resources and consider the performance implications based on your workload requirements.
  • Licensing and Compliance: Validate licensing and compliance needs around Windows Server and Hyper-V roles.
  • Networking: Carefully consider network configuration on both hypervisor layers to avoid complexity and misconfiguration.

To view and distribute the FSMO (Flexible Single Master Operations) roles in an Active Directory (AD) environment hosted on the Hyper-V platform (nested within VMware), you can use PowerShell. Here’s a detailed guide for managing FSMO roles:

Steps to Follow

1. Set up your environment:

  • Ensure the VMs in Hyper-V (running on VMware) have AD DS (Active Directory Domain Services) installed.
  • Verify DNS is properly configured and replication between domain controllers (DCs) is working.

2. Identify FSMO Roles:

The five FSMO roles in Active Directory are:

  • Schema Master
  • Domain Naming Master
  • PDC Emulator
  • RID Master
  • Infrastructure Master

These roles can be distributed among multiple domain controllers for redundancy and performance optimization.

3. Check Current FSMO Role Holders:

Use the following PowerShell command on any DC to see which server holds each role:

Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster
Get-ADDomain | Select-Object PDCEmulator, RIDMaster, InfrastructureMaster

4. Transfer FSMO Roles Using PowerShell:

To distribute roles across multiple DCs, use the Move-ADDirectoryServerOperationMasterRole cmdlet. You need to specify the target DC and the role to transfer.

Here’s how you can transfer roles:

# Define the target DCs for each role
$SchemaMaster = "DC1"
$DomainNamingMaster = "DC2"
$PDCEmulator = "DC3"
$RIDMaster = "DC4"
$InfrastructureMaster = "DC5"

# Transfer roles
Move-ADDirectoryServerOperationMasterRole -Identity $SchemaMaster -OperationMasterRole SchemaMaster
Move-ADDirectoryServerOperationMasterRole -Identity $DomainNamingMaster -OperationMasterRole DomainNamingMaster
Move-ADDirectoryServerOperationMasterRole -Identity $PDCEmulator -OperationMasterRole PDCEmulator
Move-ADDirectoryServerOperationMasterRole -Identity $RIDMaster -OperationMasterRole RIDMaster
Move-ADDirectoryServerOperationMasterRole -Identity $InfrastructureMaster -OperationMasterRole InfrastructureMaster

Replace DC1, DC2, etc., with the actual names of your domain controllers.

5. Verify Role Transfer:

After transferring the roles, verify the new role holders using the Get-ADForest and Get-ADDomain commands:

Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster
Get-ADDomain | Select-Object PDCEmulator, RIDMaster, InfrastructureMaster

6. Automate the Process:

If you want to automate the distribution of roles, you can use a script like this:

$Roles = @{
    SchemaMaster         = "DC1"
    DomainNamingMaster   = "DC2"
    PDCEmulator          = "DC3"
    RIDMaster            = "DC4"
    InfrastructureMaster = "DC5"
}

foreach ($Role in $Roles.GetEnumerator()) {
    # -Confirm:$false suppresses the per-role confirmation prompt for unattended runs
    Move-ADDirectoryServerOperationMasterRole -Identity $Role.Value -OperationMasterRole $Role.Key -Confirm:$false
    Write-Host "Transferred $($Role.Key) to $($Role.Value)"
}

7. Test AD Functionality:

After distributing FSMO roles, test AD functionality:

  • Validate replication between domain controllers.
  • Ensure DNS and authentication services are working.
  • Use the dcdiag command to verify domain controller health.
dcdiag /c /v /e /f:"C:\dcdiag_results.txt"
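
Alongside dcdiag, replication health can be confirmed with repadmin (included with the AD DS management tools):

repadmin /replsummary
repadmin /showrepl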

Connecting Grafana to vCenter’s vPostgres Database

Integrating Grafana with the vPostgres database from vCenter allows you to visualize and monitor your VMware environment’s metrics and logs. Follow this detailed guide to set up and connect Grafana to your vCenter’s database.

Step 1: Enable vPostgres Database Access on vCenter

vPostgres on vCenter is restricted by default. To enable access:

  • SSH into the vCenter Server Appliance (VCSA):
    • Enable SSH via the vSphere Web Client or vCenter Console.
    • Connect via SSH: ssh root@<vcenter-ip>
  • Access the Shell:
    • If not already in the shell, execute: shell
  • Enable vPostgres Remote Access:
    • Edit the vPostgres configuration file: vi /storage/db/vpostgres/postgresql.conf
    • Modify listen_addresses: listen_addresses = '*'
    • Save and exit.
  • Configure Client Authentication:
    • Edit the pg_hba.conf file: vi /storage/db/vpostgres/pg_hba.conf
    • Add permission for the Grafana server: host all all <grafana-server-ip>/32 md5
    • Save and exit.
  • Restart vPostgres Service: service-control --restart vmware-vpostgres

Step 2: Retrieve vPostgres Credentials

  • Locate vPostgres Credentials:
    • The vcdb.properties file contains the necessary credentials: cat /etc/vmware-vpx/vcdb.properties
    • Look for the username and password entries.
  • Test Database Connection Locally: psql -U vc -d VCDB -h localhost
    • Replace vc and VCDB with the actual username and database name found in vcdb.properties.

Step 3: Install PostgreSQL Client (Optional)

If required, install the PostgreSQL client on the Grafana host to test connectivity.

  • On Debian/Ubuntu: sudo apt install postgresql-client
  • On CentOS/RHEL: sudo yum install postgresql
  • Test the connection: psql -U vc -d VCDB -h <vcenter-ip>

Step 4: Add PostgreSQL Data Source in Grafana

  • Log in to Grafana:
    • Open Grafana in your web browser: http://<grafana-host>:3000
    • Default credentials: admin / admin.
  • Add a PostgreSQL Data Source:
    • Go to Configuration > Data Sources.
    • Click Add Data Source and select PostgreSQL.
  • Configure the Data Source:
    • Host: <vcenter-ip>:5432
    • Database: VCDB (or as found in vcdb.properties)
    • User: vc (or from vcdb.properties)
    • Password: As per vcdb.properties
    • SSL Mode: disable (unless SSL is configured)
    • Save and test the connection.

Step 5: Create Dashboards and Queries

  • Create a New Dashboard:
    • Click the + (Create) button and select Dashboard.
    • Add a new panel.
  • Write PostgreSQL Queries:
    • Example query to fetch recent events: SELECT * FROM vpx_event WHERE create_time > now() - interval '1 day';
    • Customize as needed for specific metrics or logs (e.g., VM events, tasks, performance data); see the sketch after this list.
  • Visualize Data:
    • Use Grafana’s visualization tools (e.g., tables, graphs) to display your data.
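
As a sketch of a time-series panel building on the vpx_event query above (verify column names against your vCenter version):

-- Events per hour over the last 24 hours; use Grafana's "Time series" visualization
SELECT
  date_trunc('hour', create_time) AS "time",
  count(*)                        AS events
FROM vpx_event
WHERE create_time > now() - interval '24 hours'
GROUP BY 1
ORDER BY 1;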

Step 6: Secure Access

  • Restrict vPostgres Access:
    • In the pg_hba.conf file, limit connections to just your Grafana server: host all all <grafana-server-ip>/32 md5
  • Use SSL (Optional):
    • Enable SSL in the postgresql.conf file: ssl = on
    • Use SSL certificates for enhanced security.
  • Change Default Passwords:
    • Update the vPostgres password for added security: psql -U postgres -c "ALTER USER vc WITH PASSWORD 'newpassword';"

This setup enables you to harness the power of Grafana for monitoring your VMware vCenter environment using its vPostgres database. Adjust paths and configurations to match your environment and operational security standards, and ensure that incoming connections are properly authenticated and encrypted.

Retrieve all MAC addresses of NICs associated with ESXi hosts in a cluster

# Import VMware PowerCLI module
Import-Module VMware.PowerCLI

# Connect to vCenter
$vCenter = "vcenter.local"  # Replace with your vCenter server
$username = "administrator@vsphere.local"  # Replace with your vCenter username
$password = "yourpassword"  # Replace with your vCenter password

Connect-VIServer -Server $vCenter -User $username -Password $password

# Specify the cluster name
$clusterName = "ClusterName"  # Replace with the target cluster name

# Get all ESXi hosts in the specified cluster
$esxiHosts = Get-Cluster -Name $clusterName | Get-VMHost

# Loop through each ESXi host in the cluster
# Loop through each ESXi host in the cluster ($host is a reserved automatic variable, so use $esxiHost)
foreach ($esxiHost in $esxiHosts) {
    Write-Host "Processing ESXi Host: $($esxiHost.Name)" -ForegroundColor Cyan

    # Get all physical NICs (vmnics) and VMkernel adapters on the ESXi host
    $vmnics = Get-VMHostNetworkAdapter -VMHost $esxiHost -Physical
    $vmkernelAdapters = Get-VMHostNetworkAdapter -VMHost $esxiHost -VMKernel

    # Standard vSwitches map VMkernel port groups to their uplink vmnics
    # (distributed switches would need Get-VDSwitch/Get-VDPortgroup instead)
    $vSwitches = Get-VirtualSwitch -VMHost $esxiHost -Standard

    # Display each vmnic and the VMkernel adapters reachable through it
    foreach ($vmnic in $vmnics) {
        $macAddress = $vmnic.Mac
        Write-Host "  VMNIC: $($vmnic.Name)" -ForegroundColor Green
        Write-Host "    MAC Address: $macAddress"

        # Port groups on any vSwitch that uses this vmnic as an uplink
        $pgNames = $vSwitches | Where-Object { $_.Nic -contains $vmnic.Name } |
                   Get-VirtualPortGroup | Select-Object -ExpandProperty Name
        $associatedVmkernels = $vmkernelAdapters | Where-Object { $pgNames -contains $_.PortGroupName }
        if ($associatedVmkernels) {
            foreach ($vmkernel in $associatedVmkernels) {
                Write-Host "    Associated VMkernel Adapter: $($vmkernel.Name)" -ForegroundColor Yellow
                Write-Host "      VMkernel IP: $($vmkernel.IP)"
            }
        } else {
            Write-Host "    No associated VMkernel adapters." -ForegroundColor Red
        }
    }

    Write-Host ""  # Blank line for readability
}

# Disconnect from vCenter
Disconnect-VIServer -Confirm:$false




Sample Output :

Processing ESXi Host: esxi01.local
  VMNIC: vmnic0
    MAC Address: 00:50:56:11:22:33
    Associated VMkernel Adapter: vmk0
      VMkernel IP: 192.168.1.10

  VMNIC: vmnic1
    MAC Address: 00:50:56:44:55:66
    No associated VMkernel adapters.

Processing ESXi Host: esxi02.local
  VMNIC: vmnic0
    MAC Address: 00:50:56:77:88:99
    Associated VMkernel Adapter: vmk1
      VMkernel IP: 192.168.1.20

Exporting to a CSV (Optional)

If you want to save the results to a CSV file, modify the script as follows:

  1. Create a results array at the top:
$results = @()

  2. Add results to the array inside the inner VMkernel loop (so that $vmnic and $vmkernel are in scope):

$results += [PSCustomObject]@{
    HostName        = $esxiHost.Name
    VMNIC           = $vmnic.Name
    MACAddress      = $macAddress
    VMkernelAdapter = $vmkernel.Name
    VMkernelIP      = $vmkernel.IP
}

  3. Export the results at the end:

$results | Export-Csv -Path "C:\VMNIC_VMkernel_Report.csv" -NoTypeInformation