Object Store File System

OSFS (Object Store File System) enables the VMFS-formatted namespace objects to be mounted and presented as a single vSAN datastore on each host. Data on a vSAN datastore is stored in the form of data containers called objects, which are distributed across the cluster. An object can be a vmdk file, a snapshot, or the VM home folder. A namespace object is also created for each VM; it holds a small VMFS volume that stores the virtual machine metadata files.

Logs for OSFS are captured in /var/log/osfsd.log
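
If you suspect an OSFS problem (for example, a failed directory operation), the quickest first step is usually to look at the tail of this log. A minimal sketch using standard ESXi shell tools:

# tail -n 50 /var/log/osfsd.log

# grep -i error /var/log/osfsd.log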

How do you create and remove directories on a vSAN datastore with osfs-mkdir and osfs-rmdir?

It is not easy to create or remove a directory on vSAN with the standard tools because the vSAN datastore is object based. Special vSAN-related commands exist to perform these tasks.

If you try to create a directory on vSAN with mkdir, you get an error such as:

# cd /vmfs/volumes/vsanDatastore

# mkdir TEST

mkdir: can’t create directory ‘TEST’: Function not implemented

How can we create/remove a directory?

Step 1: Log in to the ESXi host and change to the OSFS bin directory

# cd /usr/lib/vmware/osfs/bin

Step 2: List the contents 

# ls

objtool     osfs-ls     osfs-mkdir  osfs-rmdir  osfsd

Step 3: Verify that a directory called TEST does not exist

# ls -lh /vmfs/volumes/vsanDatastore/TEST

ls: /vmfs/volumes/vsanDatastore/TEST: No such file or directory

Step 4: Let’s create a directory named TEST using osfs-mkdir

# ./osfs-mkdir /vmfs/volumes/vsanDatastore/TEST

54c0ba65-0c45-xxxx-b1f2-xxxxxxxxxxxx

Step 5: Verify that it exists

# ls -lh /vmfs/volumes/vsanDatastore/TEST

lrwxr-xr-x    1 root     root          12 Jan 09 21:03 /vmfs/volumes/vsanDatastore/TEST -> 54c0ba65-0c45-xxxx-b1f2-xxxxxxxxxxxx

Step 6: Let’s try to delete the directory now using osfs-rmdir

# ./osfs-rmdir /vmfs/volumes/vsanDatastore/TEST

Deleting directory 54c0ba65-0c45-xxxx-b1f2-xxxxxxxxxxxx in container id xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

backed by vsan (force=False)

Step 7: Verify that it has been removed

# ls -lh /vmfs/volumes/vsanDatastore/TEST

ls: /vmfs/volumes/vsanDatastore/TEST: No such file or directory
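
The listing in Step 2 also shows an osfs-ls helper. As a hedged sketch (the output format may vary between builds), it can be used to list the top-level directories of the vSAN datastore and confirm that TEST is no longer present:

# cd /usr/lib/vmware/osfs/bin

# ./osfs-ls /vmfs/volumes/vsanDatastore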

What is CLOMD in vSAN?

CLOMD (Cluster Level Object Manager Daemon) plays a key role in the operation of a vSAN cluster. It runs on every ESXi host and is responsible for new object creation, initiating repair of existing objects after failures, all types of data moves and evacuations (For example: Enter Maintenance Mode, Evacuate data on disk removal from vSAN), maintaining balance and thus triggering rebalancing, implementing policy changes, etc. 

It does not actually participate in the data path, but it triggers data path operations and as such is a critical component during a number of management workflows and failure handling scenarios. 

Virtual machine power on, or Storage vMotion to vSAN are two operations where CLOMD is required (and which are not that obvious), as those operations require the creation of a swap object, and object creation requires CLOMD. 

Similarly, starting with vSAN 6.0, memory snapshots are maintained as objects, so taking a snapshot with memory state will also require the CLOMD.

Cluster health – CLOMD liveness check:

This checks if the Cluster Level Object Manager (CLOMD) daemon is alive or not. It does so by first checking that the service is running on all ESXi hosts, and then contacting the service to retrieve run-time statistics to verify that CLOMD can respond to inquiries. 

Note: This does not ensure that all of the functionalities discussed above (For example: Object creation, rebalancing) actually work, but it gives a first level assessment as to the health of CLOMD.
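
In vSAN 6.6 and later, the same health checks can also be queried from the ESXi host CLI using the esxcli vsan health namespace (listed later in this document). A minimal sketch; the exact output columns vary by release:

# esxcli vsan health cluster list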

CLOMD ERROR 

If any of the ESXi hosts are disconnected, the CLOMD liveness state of the disconnected host is shown as unknown. If the Health service is not installed on a particular ESXi host, the CLOMD liveness state of all the ESXi hosts is also reported as unknown.

If the CLOMD service is not running on a particular ESXi host, the CLOMD liveness state of that host is reported as abnormal.

For this test to succeed, the health service needs to be installed on the ESXi host and the CLOMD service needs to be running. To get the status of the CLOMD service, run this command on the ESXi host:

/etc/init.d/clomd status
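
If the daemon is stopped, it can usually be brought back with the same init script. A hedged sketch; the exact status message wording differs between builds:

# /etc/init.d/clomd status

# /etc/init.d/clomd restart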

If the CLOMD health check is still failing after these steps or if the CLOMD health check continues to fail on a regular basis, open a support request with VMware Support.

Examples:

++++++++

In the /var/run/log/clomd.log file, you see logs similar to:

2017-04-19T03:59:32.403Z 120360 (482850097440)(opID:1804289387)CLOMProcessWorkItem: Op REPAIR starts:1804289387 
2017-04-19T03:59:32.403Z 120360 (482850097440)(opID:1804289387)CLOMReconfigure: Reconfiguring ae9cf268-cd5e-abc4-448d-050010d45c96 workItem type REPAIR 
2017-04-19T03:59:32.408Z 120360 (482850097440)(opID:1804289387)CLOMReplacementPreWorkRepair: Repair needed. 1 absent/degraded data components for ae9cf268-cd5e-abc4-448d-050010d45c96 found 

^^^ Here, CLOMD crashed while attempting to repair the object with UUID ae9cf268-cd5e-abc4-448d-050010d45c96. The vSAN health check will report a CLOMD liveness issue. Restarting CLOMD does not help, because each time it is restarted it crashes again while attempting to repair the zero-sized object. Swap objects are the only vSAN objects that can be zero sized, so this issue can occur only with swap objects.
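
Before engaging support, the object referenced in the log can be inspected with the objtool utility from the OSFS bin directory shown earlier. This is only a hedged sketch; the option names may differ between builds, so check the tool's built-in help first:

# /usr/lib/vmware/osfs/bin/objtool getAttr -u ae9cf268-cd5e-abc4-448d-050010d45c96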

Core Dumps for vSAN

If your vSAN cluster uses encryption and an error occurs on an ESXi host, the resulting core dump is encrypted to protect customer data. Core dumps that are included in the vm-support package are also encrypted.

Note:

Core dumps can contain sensitive information. Check with your Data Security Team and Privacy Policy when handling core dumps.

Core Dumps on ESXi Hosts

When an ESXi host crashes, an encrypted core dump is generated and the host reboots. The core dump is encrypted with the host key that is in the ESXi key cache.

  • In most cases, vCenter Server retrieves the key for the host from the KMS and attempts to push the key to the ESXi host after reboot. If the operation is successful, you can generate the vm-support package and you can decrypt or re-encrypt the core dump.

  • If vCenter Server cannot access the ESXi host, you might be able to retrieve the key from the KMS.

  • If the host used a custom key, and that key differs from the key that vCenter Server pushes to the host, you cannot manipulate the core dump. Avoid using custom keys.

Core Dumps and vm-support Packages

When you contact VMware Technical Support because of a serious error, your support representative usually asks you to generate a vm-support package. The package includes log files and other information, including core dumps. If support representatives cannot resolve the issue by looking at log files and other information, you can decrypt the core dumps to make relevant information available. Follow your organization’s security and privacy policy to protect sensitive information, such as host keys.
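
The bundle itself is generated with the vm-support utility on the ESXi host. A minimal sketch, assuming a datastore named datastore1 with enough free space for the output (the path is illustrative):

# vm-support -w /vmfs/volumes/datastore1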

Core Dumps on vCenter Server Systems

A core dump on a vCenter Server system is not encrypted. vCenter Server already contains potentially sensitive information. At the minimum, ensure that the Windows system on which vCenter Server runs or the vCenter Server Appliance is protected. You also might consider turning off core dumps for the vCenter Server system. Other information in log files can help determine the problem.

vSAN issues fixed in the 6.5 release

  • An ESXi host fails with a purple diagnostic screen when mounting a vSAN disk group: Due to an internal race condition in vSAN, an ESXi host might fail with a purple diagnostic screen when you attempt to mount a vSAN disk group. This issue is resolved in this release.
  • Using objtool on a vSAN witness host causes an ESXi host to fail with a purple diagnostic screen: If you use objtool on a vSAN witness host, it performs an I/O control (ioctl) call which leads to a NULL pointer dereference on the ESXi host, and the host crashes. This issue is resolved in this release.
  • Hosts in a vSAN cluster have high congestion which leads to host disconnects: When vSAN components with invalid metadata are encountered while an ESXi host is booting, a leak of reference counts to SSD blocks can occur. If these components are later removed by a policy change, disk decommission, or other method, the leaked reference counts cause the next I/O to the SSD block to get stuck. The log files can build up, which causes high congestion and host disconnects. This issue is resolved in this release.
  • Cannot enable vSAN or add an ESXi host into a vSAN cluster due to corrupted disks: When you enable vSAN or add a host to a vSAN cluster, the operation might fail if there are corrupted storage devices on the host. Python zdumps are present on the host after the operation, and the vdq -q command fails with a core dump on the affected host. This issue is resolved in this release.
  • vSAN Configuration Assist issues a physical NIC warning for lack of redundancy when LAG is configured as the active uplink: When the uplink port is a member of a Link Aggregation Group (LAG), the LAG provides redundancy. If the uplink port number is 1, vSAN Configuration Assist issues a warning that the physical NIC lacks redundancy. This issue is resolved in this release.
  • vSAN cluster becomes partitioned after the member hosts and vCenter Server reboot: If the hosts in a unicast vSAN cluster and the vCenter Server are rebooted at the same time, the cluster might become partitioned. The vCenter Server does not properly handle unstable vpxd property updates during a simultaneous reboot of hosts and vCenter Server. This issue is resolved in this release.
  • An ESXi host fails with a purple diagnostic screen due to incorrect adjustment of read cache quota: The vSAN mechanism that controls the read cache quota might make incorrect adjustments that result in a host failure with a purple diagnostic screen. This issue is resolved in this release.
  • Large file system overhead reported by the vSAN capacity monitor: When deduplication and compression are enabled on a vSAN cluster, the Used Capacity Breakdown (Monitor > vSAN > Capacity) incorrectly displays the percentage of storage capacity used for file system overhead. This number does not reflect the actual capacity being used for file system activities. The display needs to correctly reflect the file system overhead for a vSAN cluster with deduplication and compression enabled. This issue is resolved in this release.
  • vSAN health check reports a CLOMD liveness issue due to swap objects with a size of 0 bytes: If a vSAN cluster has objects with a size of 0 bytes, and those objects have any components in need of repair, CLOMD might crash. The CLOMD log in /var/run/log/clomd.log might display entries similar to the following:

2017-04-19T03:59:32.403Z 120360 (482850097440)(opID:1804289387)CLOMProcessWorkItem: Op REPAIR starts:1804289387
2017-04-19T03:59:32.403Z 120360 (482850097440)(opID:1804289387)CLOMReconfigure: Reconfiguring ae9cf658-cd5e-dbd4-668d-020010a45c75 workItem type REPAIR 
2017-04-19T03:59:32.408Z 120360 (482850097440)(opID:1804289387)CLOMReplacementPreWorkRepair: Repair needed. 1 absent/degraded data components for ae9cf658-cd5e-dbd4-668d-020010a45c75 found   

The vSAN health check reports a CLOMD liveness issue. Each time CLOMD is restarted, it crashes while attempting to repair the affected object. Swap objects are the only vSAN objects that can have a size of zero bytes.

This issue is resolved in this release.

  • vSphere API FileManager.DeleteDatastoreFile_Task fails to delete DOM objects in vSAN: If you delete vmdks from the vSAN datastore using the FileManager.DeleteDatastoreFile_Task API, through the file browser or SDK scripts, the underlying DOM objects are not deleted. These objects can build up over time and take up space on the vSAN datastore. This issue is resolved in this release.
  • A host in a vSAN cluster fails with a purple diagnostic screen due to an internal race condition: When a host in a vSAN cluster reboots, a race condition might occur between the PLOG relog code and the vSAN device discovery code. This condition can corrupt memory tables and cause the ESXi host to fail and display a purple diagnostic screen. This issue is resolved in this release.
  • Attempts to install or upgrade an ESXi host with ESXCLI or vSphere PowerCLI commands might fail for the esx-base, vsan and vsanhealth VIBs: From ESXi 6.5 Update 1 and above, there is a dependency between the esx-tboot VIB and the esx-base VIB, and you must also include the esx-tboot VIB as part of the vib update command for a successful installation or upgrade of ESXi hosts. Workaround: Include the esx-tboot VIB as part of the vib update command. For example: esxcli software vib update -n esx-base -n vsan -n vsanhealth -n esx-tboot -d /vmfs/volumes/datastore1/update-from-esxi6.5-6.5_update01.zip

Configure vSAN Stretched Cluster

Stretched clusters extend the vSAN cluster from a single data site to two sites for a higher level of availability and intersite load balancing. Stretched clusters are typically deployed in environments where the distance between data centers is limited, such as metropolitan or campus environments.

You can use stretched clusters to manage planned maintenance and avoid disaster scenarios, because maintenance or loss of one site does not affect the overall operation of the cluster. In a stretched cluster configuration, both data sites are active sites. If either site fails, vSAN uses the storage on the other site. vSphere HA restarts any VM that must be restarted on the remaining active site.

Configure a vSAN cluster that stretches across two geographic locations or sites.

Prerequisites

  • Verify that you have a minimum of three hosts: one for the preferred site, one for the secondary site, and one host to act as a witness.

  • Verify that you have configured one host to serve as the witness host for the stretched cluster. Verify that the witness host is not part of the vSAN cluster, and that it has only one VMkernel adapter configured for vSAN data traffic.

  • Verify that the witness host is empty and does not contain any components. To configure an existing vSAN host as a witness host, first evacuate all data from the host and delete the disk group.

Procedure

  1. Navigate to the vSAN cluster in the vSphere Web Client.
  2. Click the Configure tab.
  3. Under vSAN, click Fault Domains and Stretched Cluster.
  4. Click the Stretched Cluster Configure button to open the stretched cluster configuration wizard.
  5. Select the fault domain that you want to assign to the secondary site and click >>.

    The hosts that are listed under the Preferred fault domain are in the preferred site.

  6. Click Next.
  7. Select a witness host that is not a member of the vSAN stretched cluster and click Next.
  8. Claim storage devices on the witness host and click Next.

    Claim storage devices on the witness host. Select one flash device for the cache tier, and one or more devices for the capacity tier.

  9. On the Ready to complete page, review the configuration and click Finish.
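
Once the wizard completes, you can confirm from the shell of any member host that the cluster has formed and that the expected number of members (data hosts plus the witness) is present. A minimal sketch; output field names may vary slightly by release:

# esxcli vsan cluster get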

 

You can change the witness host for a vSAN stretched cluster.

Change the ESXi host used as a witness host for your vSAN stretched cluster.

Prerequisites

Verify that the witness host is not in use.

Procedure

  1. Navigate to the vSAN cluster in the vSphere Web Client.
  2. Click the Configure tab.
  3. Under vSAN, click Fault Domains and Stretched Cluster.
  4. Click the Change witness host button.
  5. Select a new host to use as a witness host, and click Next.
  6. Claim disks on the new witness host, and click Next.
  7. On the Ready to complete page, review the configuration, and click Finish.

 

You can configure the secondary site as the preferred site. The current preferred site becomes the secondary site.

Procedure

  1. Navigate to the vSAN cluster in the vSphere Web Client.
  2. Click the Configure tab.
  3. Under vSAN, click Fault Domains and Stretched Cluster.
  4. Select the secondary fault domain and click the Mark Fault Domain as preferred for Stretched Cluster icon.
  5. Click Yes to confirm.

    The selected fault domain is marked as the preferred fault domain.
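
Recent vSAN releases also expose the preferred fault domain through esxcli, which is a convenient way to confirm the change from a host's shell. A hedged sketch; the namespace may not be present on older builds:

# esxcli vsan cluster preferredfaultdomain get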

vSAN Prerequisites and Requirements for Deployment

Before delving into the installation and configuration of vSAN, it’s necessary to discuss the requirements and the prerequisites. VMware vSphere is the foundation of every vSAN based virtual infrastructure.

VMware vSphere:
+++++++++++++++

vSAN was first released with VMware vSphere 5.5 U1. Additional versions of vSAN were released with VMware vSphere 6.0 (vSAN 6.0), VMware vSphere 6.0 U1 (vSAN 6.1), and VMware vSphere 6.0 U2 (vSAN 6.2). Each of these releases included additional vSAN features.

VMware vSphere consists of two major components: the vCenter Server management tool and the ESXi hypervisor. To install and configure vSAN, both vCenter Server and ESXi are required.
VMware vCenter Server provides a centralized management platform for VMware vSphere environments. It is the solution used to provision new virtual machines (VMs), configure hosts, and perform many other operational tasks associated with managing a virtualized infrastructure.
To run a fully supported vSAN environment, the vCenter server 5.5 U1 platform is the minimum requirement, although VMware strongly recommends using the latest version of vSphere where possible. vSAN can be managed by both the Windows version of vCenter server and the vCenter Server appliance (VCSA). vSAN is configured and monitored via the vSphere web client, and this also needs a minimum version of 5.5 U1 for support. vSAN can also be fully configured and managed through the command-line interface (CLI) and the vSphere application programming interface (API) for those wanting to automate some (or all) of the aspects of vSAN configuration, monitoring, or management. Although a single cluster can contain only one vSAN datastore, a vCenter server can manage multiple vSAN and compute clusters.

ESXi:
+++++

VMware ESXi is an enterprise-grade virtualization product that allows you to run multiple instances of an operating system in a fully isolated fashion on a single server. It is a bare-metal hypervisor, meaning that it does not require a general-purpose host operating system and has an extremely thin footprint. ESXi is the foundation for the large majority of virtualized environments worldwide. For standard datacenter deployments, vSAN requires a minimum of three ESXi hosts (where each host has local storage and is contributing this storage to the vSAN datastore) to form a supported vSAN cluster. This allows the cluster to meet the minimum availability requirement of tolerating at least one host failure.

With vSAN 6.1 (released with vSphere 6.0 U1), VMware introduced the concept of a 2-node vSAN cluster, primarily for remote office/branch office deployments. There are some additional considerations around the use of a 2-node vSAN cluster, including the concept of a witness host. As of vSAN 6.0, a maximum of 64 ESXi hosts per cluster is supported, a significant increase from the 32 hosts that were supported in the initial vSAN release that was part of vSphere 5.5, from here on referred to as vSAN 5.5. The ESXi hosts must be running version 6.0 at a minimum to support 64 hosts, however. At a minimum, it is recommended that a host have at least 6 GB of memory. If you configure a host to contain the maximum number of disk groups, we recommend that the host be configured with a minimum of 32 GB of memory. vSAN does not consume all of this memory, but it is required for the maximum configuration. The vSAN host memory requirement is directly related to the number of physical disks in the host and the number of disk groups configured on the host. In all cases, we recommend going with more than 32 GB per host to ensure that your workloads, vSAN, and the hypervisor have sufficient resources for an optimal user experience.

Cache and Capacity Devices:
+++++++++++++++++++++++++++
With the release of vSAN 6.0, VMware introduced the new all-flash version of vSAN. vSAN was only available as a hybrid configuration with version 5.5. A hybrid configuration is one where the cache tier is made up of flash-based devices and the capacity tier is made up of magnetic disks. In the all-flash version, both the cache tier and capacity tier are made up of flash devices. The flash devices of the cache and capacity tiers are typically different grades of flash device in terms of performance and endurance. This allows you, under certain circumstances, to create all-flash configurations at a price point comparable to SAS-based magnetic disk configurations.

 

vSAN Requirements:
++++++++++++++++++
Before enabling vSAN, it is highly recommended that the vSphere administrator validate that the environment meets all the prerequisites and requirements. To enhance resilience, this list also includes recommendations from an infrastructure perspective:
>>Minimum of three ESXi hosts for standard datacenter deployments. Minimum of two ESXi hosts and a witness host for the smallest deployment, for example, remote office/branch office.
>>Minimum of 6 GB memory per host to install ESXi.
>>VMware vCenter Server.
>>At least one device for the capacity tier. One hard disk for hosts contributing storage to vSAN datastore in a hybrid configuration; one flash device for hosts contributing storage to vSAN datastore in an all-flash configuration.
>>At least one flash device for the cache tier for hosts contributing storage to vSAN datastore, whether hybrid or all-flash.
>>One boot device to install ESXi.
>>At least one disk controller. Pass-through/JBOD mode capable disk controller preferred.
>>Dedicated network port for vSAN–VMkernel interface. 10 GbE preferred, but 1 GbE supported for smaller hybrid configurations. With 10 GbE, the adapter does not need to be dedicated to vSAN traffic, but can be shared with other traffic types, such as management traffic, vMotion traffic, etc.
>>Multicast enabled on the vSAN network (L2 multicast is sufficient when all hosts share a subnet; L3 multicast is needed only when they span subnets). This applies to vSAN releases prior to 6.6, which moves to unicast as described later. A quick network verification sketch follows this list.
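
As a quick sanity check of the networking requirement, you can confirm on each host which VMkernel interface is tagged for vSAN traffic and that the hosts can reach each other over it. A minimal sketch using standard ESXi commands; vmk1 and the peer address are placeholders:

# esxcli vsan network list

# vmkping -I vmk1 192.168.10.12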

vSAN Ready Nodes:
++++++++++++++++++
vSAN ready nodes are a great alternative to manually selecting components, and they are also the preferred way of building a vSAN configuration. Various vendors have gone through the exercise for you and created configurations that are called vSAN ready nodes. These nodes consist of tested and certified hardware only and, in our opinion, provide an additional guarantee.

For more information, please follow: https://www.vsan-essentials.com/

vSAN Performance Capabilities

It is difficult to predict what your performance will be because every workload and every combination of hardware will provide different results. After the initial vSAN launch, VMware announced the results of multiple performance tests
(http://blogs.vmware.com/vsphere/2014/03/supercharge-virtual-san-cluster-2-millioniops.html).

The results were impressive, to say the least, but they were only the beginning. With the 6.1 release, the performance of hybrid configurations had doubled and so had the scale, allowing for 8 million IOPS per cluster. The introduction of all-flash, however, completely changed the game. It allowed vSAN to reach 45K IOPS per disk group (and remember, you can have 5 disk groups per host), and it also introduced sub-millisecond latency. (Just for completeness' sake, theoretically it would be possible to design a vSAN cluster that could deliver over 16 million IOPS with sub-millisecond latency using an all-flash configuration.)

Do note that these performance numbers should not be used as a guarantee for what you can achieve in your environment. These are theoretical tests that are not necessarily (and most likely not) representative of the I/O patterns you will see in your own environment (and so results will vary). Nevertheless, it does prove that vSAN is capable of delivering a high performance environment. At the time of writing the latest performance document available is for vSAN 6.0, which can be found here:
http://www.vmware.com/files/pdf/products/vsan/VMware-Virtual-San6-ScalabilityPerformance-Paper.pdf.

We highly recommend, however, searching for the latest version, as we are certain that there will be an updated version with the 6.2 release of vSAN. One thing that stands out when reading these types of papers is that all publicly available performance tests and reference architectures from VMware have been done with 10 GbE networking configurations. For our design scenarios, we will use 10 GbE as the golden standard because it is heavily recommended by VMware and it increases throughput and lowers latency. The only configuration where this does not apply is ROBO (remote office/branch office). This 2-node vSAN configuration is typically deployed using 1 GbE, since the number of VMs running is typically relatively low (up to 20 in total). There are different configuration options for networking, including the use of Network I/O Control.

vSAN 6.6: New Features

A new VMware vSAN release was just announced, namely vSAN 6.6.

There are many new features in the latest release; a few of them are listed below:

1: vSAN Encryption

Encryption in vSAN 6.6 takes place at the lowest level of the stack, meaning that you still get the benefits of deduplication and compression. vSAN encryption is enabled at the cluster level, but it is implemented at the physical disk layer, so each disk has its own key provided by a supported Key Management Server (KMS). For customers running all-flash vSAN, the previously available VM-level encryption had one big disadvantage: encryption happens at the highest level, meaning the I/O is already encrypted when it reaches the write buffer and is moved to the capacity tier, so deduplication and compression lose their effectiveness.

vCenter instance object –> Configure tab –> More / Key Management Servers.

2: Local Protection in vSAN Stretched Cluster

There are now two protection policies; Primary level of failures to tolerate (PFTT) and Secondary level of failures to tolerate (SFTT). For stretched cluster, PFTT defines cross site protection, implemented as RAID-1. For stretched cluster, SFTT defines local site protection. SFTT can be implemented as RAID-1, RAID-5 and RAID-6. 

3: Unicast Mode:

If you are upgrading from a previous version of vSAN, vSAN automatically switches to unicast once all hosts have been upgraded to vSAN 6.6. There is a catch, however: if the on-disk format has not been upgraded to the latest version (5) and a pre-vSAN 6.6 host is added to the cluster, the cluster reverts to multicast. You can check the unicast configuration on an ESXi host with the following command:

esxcli vsan cluster unicastagent list

4: Resync Throttling:

In the past, if a resync process was interrupted, the resync may need to start all over again. Now in vSAN 6.6, resync activity will resume from where it left off (if interrupted) by using a new resync bitmap to track changes.

5: Pre-checks for Maintenance Mode:

The pre-check reports on the data present in the disk group and what will happen to it when the host enters maintenance mode or when disks are removed.

Warning message: Data on the disks from the disk group xxxxxxxxx will be deleted. Unless the data on the disks is evacuated first, removing the disks might disrupt working VMs.

Three options are presented, each with enough detail to understand the impact (a CLI sketch follows this list):

  • Evacuate all data to other hosts –> It shows the amount of data that will be moved to other hosts.
  • Ensure data accessibility from other hosts –> No data will be moved.
  • No data evacuation –> No data will be moved from the location.
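
These three choices correspond to the vSAN data migration modes exposed by esxcli when a host is placed into maintenance mode from the command line. A hedged sketch; verify the option names against your build's esxcli help (the other accepted modes are evacuateAllData and noAction):

# esxcli system maintenanceMode set -e true -m ensureObjectAccessibility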

6: HTML5 Host Client Integration:

This is one of the best and most awaited features for vSAN.

For more reference please follow :

 

New esxcli commands for vSAN

A new esxcli command to assist with troubleshooting has also been added:

esxcli vsan debug
Usage: esxcli vsan debug {cmd} [cmd options]

Available Namespaces:

 disk Debug commands for vSAN physical disks
 object Debug commands for vSAN objects
 resync Debug commands for vSAN resyncing objects
 controller Debug commands for vSAN disk controllers
 limit Debug commands for vSAN limits
 vmdk Debug commands for vSAN VMDKs

In addition to the esxcli vsan debug command, vSAN 6.6 also adds the following commands to get troubleshooting information:

 • esxcli vsan health cluster
 • esxcli vsan resync bandwidth
 • esxcli vsan resync throttle
 • esxcli vsan cluster unicastagent list
Example 1:
++++++++++
Use "vsan debug vmdk" command to check all of VMDKs status:
 
 [root@TestVSAN:~] esxcli vsan debug disk list
 UUID: 52bc7654-xxxx-xxxx-xxxx-54cf6cda4368
    Name: naa.xxx
    SSD: True
    Overall Health: green
    Congestion Health:
          State: green
          Congestion Value: 0
          Congestion Area: none
    In Cmmds: true
    In Vsi: true
    Metadata Health: green
    Operational Health: green
    Space Health:
          State: green
          Capacity: 0 bytes
          Used: 0 bytes
          Reserved: 0 bytes
 
Example 2:
++++++++++
 [root@TestVSAN:~] esxcli vsan cluster unicastagent list
NodeUuid                              IsWitness  Supports Unicast  IP Address   Port   Iface Name
------------------------------------  ---------  ----------------  -----------  -----  ----------
52e8ac54-xxxx-xxxx-xxxx-54cf6cda4368          0  true              10.10.0.111  12321
52e8ac78-xxxx-xxxx-xxxx-98cf6xas5345          0  true              10.10.0.112  12321
52e8ac21-xxxx-xxxx-xxxx-56ab6cba4368          0  true              10.10.0.113  12321
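
Example 3:
++++++++++
As a further hedged illustration of the debug namespace (the health-status categories shown in the output vary by release), the object health summary gives a quick count of healthy versus degraded objects across the cluster:

 [root@TestVSAN:~] esxcli vsan debug object health summary get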