ESXi hosts can run fio, ioping, and esxtop for storage benchmarking directly from the CLI, while vCenter PowerCLI aggregates performance across clusters and datastores. This script suite generates IOPS, latency, and throughput charts viewable in Confluence/HTML dashboards.

Core Testing Engine (fio-perf-test.sh)

Purpose: Run standardized fio workloads (4K random, 64K seq) on VMFS/NFS datastores.

```bash
#!/bin/bash
# fio-perf-test.sh - Run on ESXi via SSH
# First VMFS/NFS mount point; the Mount Point column is already a full
# /vmfs/volumes/... path, so no extra prefix is needed
DATASTORE="$(esxcli storage filesystem list | grep -E 'VMFS|NFS' | head -1 | awk '{print $1}')"
TEST_DIR="$DATASTORE/perf-test"
# fio is not bundled with stock ESXi; install it and adjust this path
FIO_TEST="/usr/lib/vmware/fio/fio"

mkdir -p "$TEST_DIR"
cd "$TEST_DIR" || exit 1

# fio job files use INI syntax, not YAML; ioengine=libaio assumes a fio
# build with Linux AIO support (use posixaio or sync otherwise)
cat > fio-random-4k.fio << EOF
[global]
ioengine=libaio
direct=1
size=1G
time_based
runtime=60
group_reporting
directory=$TEST_DIR

[rand-read]
rw=randread
bs=4k
numjobs=4
iodepth=32
filename=testfile.dat

[rand-write]
rw=randwrite
bs=4k
numjobs=4
iodepth=32
filename=testfile.dat
EOF

# Run tests
$FIO_TEST fio-random-4k.fio > /tmp/fio-4k-results.txt
$FIO_TEST --name=seq-read --rw=read --bs=64k --size=4G --runtime=60 --time_based \
    --direct=1 --numjobs=1 --iodepth=32 \
    --filename="$TEST_DIR/testfile.dat" > /tmp/fio-seq-results.txt

# Cleanup test files (results stay in /tmp)
rm -rf "${TEST_DIR:?}"/*
# Append a CSV row: host, date, 4K random-read IOPS (crude scrape of fio's
# human-readable output; --output-format=json is more robust)
echo "$(hostname),$(date +%F),$(grep -m1 'read:' /tmp/fio-4k-results.txt | sed 's/.*IOPS=\([^,]*\),.*/\1/')" >> /tmp/storage-perf.csv
```

Cron schedule: `0 2 * * 1 /scripts/fio-perf-test.sh` (weekly baseline). Note that ESXi's local crontab does not survive reboots, so re-add the entry from /etc/rc.local.d/local.sh or trigger the run remotely.
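The CSV line in fio-perf-test.sh scrapes fio's human-readable output, which changes between releases. Running fio with `--output-format=json` yields stable field names; below is a minimal parser sketch (the script name and `summarize` helper are made up here; the field paths follow fio's JSON report schema):

```python
#!/usr/bin/env python3
# parse-fio-json.py - summarize a fio JSON report (run fio with --output-format=json)
import json
import sys

def summarize(fio_report):
    """Return (jobname, read IOPS, mean read latency in ms) per job."""
    rows = []
    for job in fio_report["jobs"]:
        read = job["read"]
        lat_ms = read["lat_ns"]["mean"] / 1e6  # fio reports latency in nanoseconds
        rows.append((job["jobname"], round(read["iops"], 1), round(lat_ms, 2)))
    return rows

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        for name, iops, lat in summarize(json.load(f)):
            print(f"{name}: {iops} IOPS @ {lat} ms")
```

To use it, swap the grep/sed scrape for `$FIO_TEST --output-format=json fio-random-4k.fio > /tmp/fio-4k.json` and feed the JSON file to this script.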

vCenter PowerCLI Aggregator (StoragePerf.ps1)

Purpose: Collect historical datastore throughput plus live latency counters across all hosts.

```powershell
# StoragePerf.ps1 - vCenter Storage Performance Dashboard
Connect-VIServer vcenter.example.com

$Report = @()
$Clusters = Get-Cluster

foreach ($Cluster in $Clusters) {
    # $Host is a reserved automatic variable in PowerShell - use $VMHost instead
    $VMHosts = Get-VMHost -Location $Cluster
    foreach ($VMHost in $VMHosts) {
        # Live device latency in ms. Invoke-VMScript only targets guest VMs,
        # not ESXi hosts, so pull the counter from vCenter instead.
        $LatencyMs = (Get-Stat -Entity $VMHost -Stat "disk.maxTotalLatency.latest" `
                      -Realtime -MaxSamples 1 -ErrorAction SilentlyContinue).Value

        # Historical datastore throughput (KBps, last 24 realtime samples)
        $Datastores = Get-Datastore -VMHost $VMHost
        foreach ($DS in $Datastores) {
            # Per-datastore stat instance is the volume UUID from the ds:// URL
            $Uuid  = $DS.ExtensionData.Info.Url.TrimEnd('/').Split('/')[-1]
            $Stats = Get-Stat -Entity $VMHost -Stat "datastore.read.average","datastore.write.average" `
                     -Realtime -MaxSamples 24 -Instance $Uuid -ErrorAction SilentlyContinue
            $ReadAvg  = ($Stats | Where-Object {$_.MetricId -eq "datastore.read.average"} |
                         Measure-Object -Property Value -Average).Average
            $WriteAvg = ($Stats | Where-Object {$_.MetricId -eq "datastore.write.average"} |
                         Measure-Object -Property Value -Average).Average

            $Report += [PSCustomObject]@{
                Host          = $VMHost.Name
                Datastore     = $DS.Name
                FreeGB        = [math]::Round($DS.FreeSpaceGB,1)
                ReadAvgKBps   = [math]::Round($ReadAvg,2)
                WriteAvgKBps  = [math]::Round($WriteAvg,2)
                EsxtopLatency = $LatencyMs   # column name kept for the chart scripts
            }
        }
    }
}

# Export CSV for charts
$Report | Export-Csv "StoragePerf-$(Get-Date -f yyyy-MM-dd).csv" -NoTypeInformation

# Generate HTML dashboard
$Report | ConvertTo-Html -Property Host,Datastore,FreeGB,ReadAvgKBps,WriteAvgKBps,EsxtopLatency -Title "Storage Performance" |
    Out-File "storage-dashboard.html"
```

Performance Chart Generator (perf-charts.py)

Purpose: Converts CSV data to interactive Plotly charts for Confluence.

```python
#!/usr/bin/env python3
# perf-charts.py - Generate HTML charts from CSV
import sys

import pandas as pd
import plotly.express as px
from plotly.subplots import make_subplots

df = pd.read_csv(sys.argv[1])
# Coerce latency to numeric; unparseable values become NaN
df['EsxtopLatency'] = pd.to_numeric(df['EsxtopLatency'], errors='coerce')

# Read throughput vs latency scatter
fig1 = px.scatter(df, x='ReadAvgKBps', y='EsxtopLatency', 
                 size='FreeGB', color='Host', hover_name='Datastore',
                 title='Storage Read Performance vs Latency',
                 labels={'ReadAvgKBps':'Read KBps', 'EsxtopLatency':'Avg Latency (ms)'})

# Throughput bar chart
fig2 = px.bar(df, x='Datastore', y=['ReadAvgKBps','WriteAvgKBps'], 
              barmode='group', title='Read/Write Throughput by Datastore')

# Combined dashboard (copy every trace, not just the first, so all hosts
# and both bar series survive the merge)
fig = make_subplots(rows=2, cols=1,
                    subplot_titles=('Read Throughput vs Latency', 'Read/Write Throughput'))
for trace in fig1.data:
    fig.add_trace(trace, row=1, col=1)
for trace in fig2.data:
    fig.add_trace(trace, row=2, col=1)

fig.write_html('storage-perf-dashboard.html')
print("Charts saved: storage-perf-dashboard.html")
```

Usage: `python3 perf-charts.py StoragePerf-2025-12-28.csv`

Master Orchestrator (storage-benchmark.py)

Purpose: Runs fio tests on all ESXi hosts + generates dashboard.

```python
#!/usr/bin/env python3
# storage-benchmark.py - run fio on every host, then build the dashboard
import os
import subprocess
from datetime import datetime

import paramiko

ESXI_HOSTS = ['esxi1.example.com', 'esxi2.example.com']
VCENTER = 'vcenter.example.com'

def run_fio(host):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # Read the password from the environment rather than hardcoding it
    ssh.connect(host, username='root', password=os.environ['ESXI_PASSWORD'])

    # Copy & run fio script
    stdin, stdout, stderr = ssh.exec_command(
        'wget -O /tmp/fio-test.sh https://your-confluence/scripts/fio-perf-test.sh'
        ' && chmod +x /tmp/fio-test.sh && /tmp/fio-test.sh')
    result = stdout.read().decode()
    ssh.close()
    return result

# Execute tests
perf_data = []
for host in ESXI_HOSTS:
    print(f"Testing {host}...")
    run_fio(host)
    perf_data.append({'Host': host, 'TestTime': datetime.now()})

# Pull PowerCLI report
subprocess.run(['pwsh', '-File', 'StoragePerf.ps1'])

# Generate charts
subprocess.run(['python3', 'perf-charts.py',
                f'StoragePerf-{datetime.now().strftime("%Y-%m-%d")}.csv'])

print("Storage benchmark complete. View storage-perf-dashboard.html")
```
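storage-benchmark.py does not yet implement a dry-run mode; below is a minimal pre-flight sketch (`reachable` and `dry_run` are hypothetical helpers that only test TCP reachability of the SSH port, not credentials):

```python
#!/usr/bin/env python3
# dry-run check: verify each ESXi host answers on SSH before benchmarking
import socket
import sys

def reachable(host, port=22, timeout=3):
    """True if a TCP connection to host:port succeeds within timeout seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def dry_run(hosts):
    """Print per-host reachability; True only if every host answered."""
    results = {h: reachable(h) for h in hosts}
    for host, ok in results.items():
        print(f"{host}: {'OK' if ok else 'UNREACHABLE'}")
    return all(results.values())

if __name__ == "__main__":
    hosts = sys.argv[1:]
    if hosts:
        sys.exit(0 if dry_run(hosts) else 1)
```

A real --dry-run would also confirm the fio binary path and the `ESXI_PASSWORD` environment variable before any test traffic is generated.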

Confluence Chart Embedding

HTML Macro (paste storage-perf-dashboard.html content):

```
{html}
<!-- paste the contents of storage-perf-dashboard.html here -->
{html}
```

CSV Table with Inline Charts:

```
||Host||Datastore||Read IOPS||Latency||Chart||
|esxi1|datastore1|2450|2.3ms|!storage-esxi1.png|width=150px,height=100px!|
```

Automated Dashboard Cronjob

```
# /etc/cron.d/storage-perf - cron fragment, not a shell script (no shebang)
# Daily 3AM: test + upload to Confluence
0 3 * * * root /usr/local/bin/storage-benchmark.py >> /var/log/storage-perf.log 2>&1
```

Output Files:

  • /tmp/storage-perf.csv → Historical trends
  • storage-perf-dashboard.html → Interactive Plotly charts
  • /var/log/storage-perf.log → Audit trail

Sample Output Charts

Expected Results (Tintri VMstore baseline):

```
Datastore: tintri-vmfs-01
4K Random Read: 12,500 IOPS @ 1.8ms
4K Random Write: 8,200 IOPS @ 2.4ms
64K Seq Read: 450 MB/s
64K Seq Write: 380 MB/s
```
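With baselines like these committed alongside the weekly CSVs, a regression check is only a few lines. A sketch, using hypothetical metric keys and the numbers above as the reference:

```python
# Flag results that fall more than 10% below the Tintri baseline.
# Metric key names are placeholders; map them to whatever your fio parser emits.
BASELINE = {
    "4k_rand_read_iops": 12500,
    "4k_rand_write_iops": 8200,
    "64k_seq_read_mbps": 450,
    "64k_seq_write_mbps": 380,
}

def deviation(measured, baseline=BASELINE, tolerance=0.10):
    """Return {metric: (measured, expected)} for metrics below baseline*(1-tolerance)."""
    regressions = {}
    for metric, expected in baseline.items():
        got = measured.get(metric)
        if got is not None and got < expected * (1 - tolerance):
            regressions[metric] = (got, expected)
    return regressions
```

Feeding last week's and this week's parsed results through `deviation` makes the "compare baselines" tip below actionable instead of manual.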

Pro Tips & Alerts

```
☐ Alert if latency > 5 ms: wrap Send-MailMessage in the PowerCLI script,
  e.g. if ($Row.EsxtopLatency -gt 5) { Send-MailMessage ... }
☐ Tintri-specific: add an esxtop filter for Tintri LUN paths
☐ NFS tuning: experiment with the NFS.MaxQueueDepth advanced setting
☐ Compare baselines: git-commit the CSV files weekly
```
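The latency alert can also run outside PowerCLI against the exported CSV; a sketch using the Host/Datastore/EsxtopLatency columns from StoragePerf.ps1 (the script name and 5 ms threshold are placeholders):

```python
#!/usr/bin/env python3
# latency-alert.py - flag rows whose EsxtopLatency exceeds a threshold
import csv
import sys

def over_threshold(rows, threshold_ms=5.0):
    """Return (host, datastore, latency) for rows above threshold_ms.

    Tolerates values like "2.3ms" by stripping a trailing unit; rows that
    still fail to parse as numbers are skipped.
    """
    alerts = []
    for row in rows:
        try:
            lat = float(str(row["EsxtopLatency"]).rstrip("ms"))
        except ValueError:
            continue  # unparseable sample
        if lat > threshold_ms:
            alerts.append((row["Host"], row["Datastore"], lat))
    return alerts

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1], newline="") as f:
        for host, ds, lat in over_threshold(csv.DictReader(f)):
            print(f"ALERT {host}/{ds}: {lat} ms over threshold")
```

Pipe the output into your mailer of choice, or call it from the daily cron job after the CSV export.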

Run `python3 storage-benchmark.py --dry-run` first to validate hosts and configs.
