What is Rebalance of Objects in VSAN and how does it work?

Why do we need rebalance in a VSAN cluster?

When any capacity device in your cluster reaches 80 percent utilization, Virtual SAN automatically rebalances the cluster until the utilization of all capacity devices falls below that threshold.

Cluster rebalancing evenly distributes resources across the cluster to maintain consistent performance and availability.

Other operations can initiate cluster rebalancing:

  • If Virtual SAN detects hardware failures on the cluster
  • If Virtual SAN hosts are placed in maintenance mode with the Full data migration option
  • If Virtual SAN hosts are placed in maintenance mode with the Ensure accessibility option while objects assigned FTT=0 reside on the host

Note:

To provide enough space for maintenance and reprotection, and to minimize automatic rebalancing events in the Virtual SAN cluster, consider keeping 30 percent of capacity free at all times.

Cluster rebalancing can be done in two ways:

1: Automatic Rebalance

2: Manual Rebalance

Let us look at how automatic rebalance works:

By default, Virtual SAN automatically rebalances the Virtual SAN cluster when a capacity device reaches 80 percent utilization. Rebalancing also occurs when you place a Virtual SAN host in maintenance mode.

Run the following RVC commands to monitor the rebalance operation in the cluster (an example session follows the list):

  • vsan.check_limits. Verifies whether the disk space utilization is balanced in the cluster.
  • vsan.whatif_host_failures. Analyzes the current capacity utilization per host, interprets whether a single host failure can force the cluster to run out of space for reprotection, and analyzes how a host failure might impact cluster capacity, cache reservation, and cluster components.
    The physical capacity usage shown as the command output is the average usage of all devices in the Virtual SAN cluster.
  • vsan.resync_dashboard. Monitors any rebuild tasks in the cluster.   
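For example, from an RVC session the three commands can be pointed at the cluster object. A minimal sketch (the datacenter and cluster names are illustrative; substitute your own):

> vsan.check_limits /localhost/Test-Datacenter/computers/VSAN-Cluster
> vsan.whatif_host_failures /localhost/Test-Datacenter/computers/VSAN-Cluster
> vsan.resync_dashboard /localhost/Test-Datacenter/computers/VSAN-Cluster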

You can manually rebalance the cluster through the cluster health check or by using RVC commands.

If the Virtual SAN disk balance health check fails, you can initiate a manual rebalance in the vSphere Web Client. Under Cluster health, access the Virtual SAN Disk Balance health check, and click the Rebalance Disks button.

Use the following RVC commands to manually rebalance the cluster:

  • vsan.check_limits. Verifies whether any capacity device in the Virtual SAN cluster is approaching the 80 percent threshold limit.
  • vsan.proactive_rebalance [opts] <Path to ClusterComputeResource> --start. Manually starts the rebalance operation. When you run the command, Virtual SAN scans the cluster for the current distribution of components and begins to balance the distribution of components in the cluster. Use the command options to specify how long to run the rebalance operation in the cluster, and how much data to move each hour for each Virtual SAN host (see the sketch after this list). For more information about the command options for managing the rebalance operation in the Virtual SAN cluster, see the RVC Command Reference Guide.
    Because cluster rebalancing generates substantial I/O operations, it can be time-consuming and can affect the performance of virtual machines.
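A hedged sketch of driving the rebalance from RVC (paths and option values are illustrative; check the command's -h output in your RVC version to confirm the available options):

> vsan.proactive_rebalance --start --time-span 86400 /localhost/Test-Datacenter/computers/VSAN-Cluster
> vsan.proactive_rebalance_info /localhost/Test-Datacenter/computers/VSAN-Cluster
> vsan.proactive_rebalance --stop /localhost/Test-Datacenter/computers/VSAN-Cluster

Here --time-span bounds how long (in seconds) the rebalance is allowed to run, and vsan.proactive_rebalance_info reports its current status.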

What is Resync of Objects in VSAN?

You can monitor the status of virtual machine objects that are being resynchronized in the Virtual SAN cluster.

The following events trigger resynchronization in the cluster:

  • Editing a virtual machine (VM) storage policy. When you change VM storage policy settings, Virtual SAN might initiate object recreation and subsequent resynchronization of the objects.
    Certain policy changes might cause Virtual SAN to create another version of an object and synchronize it with the previous version. When the synchronization is complete, the original object is discarded.
    Virtual SAN ensures that VMs continue to run and are not interrupted by this process. This process might require additional temporary capacity.
  • Restarting a host after a failure.
  • Recovering hosts from a permanent or long-term failure. If a host is unavailable for more than 60 minutes (by default), Virtual SAN creates copies of data to recover the full policy compliance.
  • Evacuating data by using the Full data migration mode before you place a host in maintenance mode.
  • Exceeding the utilization threshold of a capacity device. Resynchronization is triggered when capacity device utilization in the Virtual SAN cluster approaches or exceeds the 80 percent threshold.

To evaluate the status of objects that are being resynchronized, you can monitor the resynchronization tasks that are currently in progress.

Prerequisites:

Verify that hosts in your Virtual SAN cluster are running ESXi 6.0 or later.

Procedure:

  1. Navigate to the Virtual SAN cluster in the vSphere Web Client.
  2. Select the Monitor tab and click Virtual SAN.
  3. Select Resyncing Components to track the progress of resynchronization of virtual machine objects and the number of bytes remaining before the resynchronization is complete. You can also view information about the number of objects currently being synchronized in the cluster, the estimated time to finish the resynchronization, the time remaining for the storage objects to fully comply with the assigned storage policy, and so on. If your cluster has connectivity issues, the data on the Resyncing Components page might not refresh as expected and the fields might reflect inaccurate information. (An RVC alternative is shown below.)
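If the Web Client is unavailable, the same resync information can be approximated from an RVC session (cluster path illustrative):

> vsan.resync_dashboard /localhost/Test-Datacenter/computers/VSAN-Cluster

This prints a table of the objects currently syncing and the bytes left to sync.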

Best practices to upgrade VSAN

Before you attempt to upgrade vSAN, verify that your environment meets the vSphere hardware and software requirements.

1: Verify that vSAN 6.6 supports the software and hardware components, drivers, firmware, and storage I/O controllers that you plan to use. Supported items are listed on the VMware Compatibility Guide website at http://www.vmware.com/resources/compatibility/search.php.

2: Verify that you have enough space available to complete the software version upgrade. The amount of disk storage needed for the vCenter Server installation depends on your vCenter Server configuration.

3: Verify that you have enough capacity available to upgrade the disk format. If free space equal to the consumed capacity of the largest disk group is not available on the disk groups other than those being converted, you must choose Allow reduced redundancy as the data migration option.

4: Verify that you have placed the vSAN hosts in maintenance mode and selected the Ensure data accessibility or Evacuate all data option.

5: Verify that you have backed up your virtual machines.

6: All vSAN disks should be healthy.

      >> No disk should be failed or absent.

      >> This can be determined via the vSAN Disk Management view in the vSphere Web Client.

7: There should be no inaccessible vSAN objects.

     >> Use RVC to determine this: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vsan/vmware-ruby-vsphere-console-command-reference-for-virtual-san.pdf

8: There should not be any active resync at the start of the upgrade process (a hedged RVC sketch for checks 6-8 follows).
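As an example, checks 6-8 can be run from an RVC session before starting the upgrade (cluster path illustrative; exact commands vary by RVC version):

> vsan.disks_stats /localhost/Test-Datacenter/computers/VSAN-Cluster
> vsan.check_state /localhost/Test-Datacenter/computers/VSAN-Cluster
> vsan.resync_dashboard /localhost/Test-Datacenter/computers/VSAN-Cluster

vsan.disks_stats reports per-disk health and usage, vsan.check_state flags inaccessible or out-of-sync objects, and vsan.resync_dashboard confirms that no resync is active.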

After starting a vSAN Cluster upgrade:

1: Do not attempt to upgrade a cluster by introducing new versions to the cluster and migrating workloads.

2: If you are adding or replacing disks in the midst of an upgrade, ensure that they are formatted with the appropriate legacy on-disk format version: How to format vSAN Disk Groups with a legacy format version (2146221).

Recommendations:

Consider the following recommendations when deploying ESXi hosts for use with vSAN:

1: If ESXi hosts are configured with memory capacity of 512 GB or less, use SATADOM, SD, USB, or hard disk devices as the installation media.
2: If ESXi hosts are configured with memory capacity greater than 512 GB, use a separate magnetic disk or flash device as the installation device. If you are using a separate device, verify that vSAN is not claiming the device.
3: When you boot a vSAN host from a SATADOM device, you must use a single-level cell (SLC) device and the size of the boot device must be at least 16 GB.
4: To ensure your hardware meets the requirements for vSAN, refer to Hardware Requirements for vSAN.

Please check the vSAN upgrade requirements (2145248) : https://kb.vmware.com/s/article/2145248

Remove Disk Partition on VSAN

If you have added a device that contains residual data or partition information, you must remove all preexisting partition information from the device before you can claim it for vSAN use. VMware recommends adding clean devices to disk groups.

When you remove partition information from a device, vSAN deletes the primary partition that includes disk format information and logical partitions from the device.

Prerequisites:

+++++++++++

Verify that the device is not in use by ESXi as a boot disk, a VMFS datastore, or by vSAN.

Procedure:

++++++++

  1. Navigate to the vSAN cluster in the vSphere Web Client.
  2. Click the Configure tab.
  3. Under vSAN, click Disk Management.
  4. Select a host to view the list of available devices.
  5. From the Show drop-down menu at the bottom of the page, select Ineligible.
  6. Select a device from the list, and click the Erase partitions on the selected disks icon.
  7. Click OK to confirm.
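If the vSphere Web Client is unavailable, partition information can also be inspected and removed from the ESXi shell with partedUtil. A cautious sketch (the device name is illustrative, and deleting the wrong partition is destructive):

# partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx
# partedUtil delete /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx 1

The first command lists the partition table; the second deletes partition number 1 from the device.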

If vCenter Server is inaccessible from the vSphere Web Client, the steps below show how to manually remove and re-create a vSAN disk group using the ESXi Command Line Interface (esxcli).

To remove and recreate a disk group using esxcli commands:

Note: These steps can be data destructive if not followed carefully.

  1. Log in to the ESXi host that owns the disk group as the root user using SSH.
  2. Run one of these commands to put the host in maintenance mode. There are three options:

    Note: VMware recommends using the ensureObjectAccessibility option. Failure to use either the ensureObjectAccessibility or evacuateAllData mode may result in data loss.

    • Recommended:
      • Ensure accessibility of data:
        esxcli system maintenanceMode set --enable true -m ensureObjectAccessibility
      • Evacuate data:
        esxcli system maintenanceMode set --enable true -m evacuateAllData
    • Not recommended:
      • Don't evacuate data:
        esxcli system maintenanceMode set --enable true -m noAction
  3. Record the cache and capacity disk IDs in the existing disk group by running this command:
    esxcli vsan storage list

    Example output of a capacity tier device:
    naa.123456XXXXXXXXXXX:
    Device: naa.123456XXXXXXXXXXX
    Display Name: naa.123456XXXXXXXXXXX
    Is SSD: true
    VSAN UUID: xxxxxxxx-668b-ec68-b632-xxxxxxxxxxxx
    VSAN Disk Group UUID: xxxxxxxx-17c6-6f42-d10b-xxxxxxxxxxxx
    VSAN Disk Group Name: naa.50000XXXXX1245
    Used by this host: true
    In CMMDS: true
    On-disk format version: 5
    Deduplication: true
    Compression: true
    Checksum: 598612359878654763
    Checksum OK: true
    Is Capacity Tier: true
    Encryption: false
    DiskKeyLoaded: false

    Note: For a cache disk:

    • the VSAN UUID and VSAN Disk Group UUID fields will match
    • Output will report: Is Capacity Tier: false
  4. Remove the disk group by specifying its UUID:

    esxcli vsan storage remove -u <VSAN Disk Group UUID>

    Note: Always double check the disk group UUID with the command:
    esxcli vsan storage list

  5. If you have replaced physical disks, see the Additional Information section.
  6. Create the disk group, using this command:
    esxcli vsan storage add -s naa.xxxxxx -d naa.xxxxxxx -d naa.xxxxxxxxxx -d naa.xxxxxxxxxxxx

    Where naa.xxxxxx is the NAA ID of the disk device and the disk devices are identified as per these options:

    • -s indicates an SSD.
    • -d indicates a capacity disk.
  7. Run the esxcli vsan storage list command to see the new disk group and verify that all disks report In CMMDS: true in the output (a quick filter is shown below).
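To spot-check the result quickly, the relevant fields can be filtered with the busybox grep available on ESXi, and the host can then be taken out of maintenance mode:

# esxcli vsan storage list | grep -E "Device:|VSAN Disk Group UUID:|In CMMDS:|Is Capacity Tier:"
# esxcli system maintenanceMode set --enable false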

Storage Policies on vSAN

Throughout the life cycle of the Virtual Machine, profile-driven storage allows the administrator to check whether its underlying storage is still compatible. The reason why this is useful is that if the VM is migrated to a different datastore for whatever reason, the administrator can ensure that it has moved to a datastore that continues to meet its requirements.

However, this is not how it works with VM storage policies. With vSAN, the storage quality of service no longer resides with the datastore; instead, it resides with the VM and is enforced by the VM storage policy associated with the VM and its disks (VMDKs). Once the policy is pushed down to the storage layer, in this case vSAN, the underlying storage is then responsible for creating storage for the VM that meets the requirements placed in the policy.

This makes life a bit easier for administrators.

Storage Policy-Based Management:

Deploying a VSAN datastore is different from the traditional way of mounting a LUN from a storage array or a NAS/NFS device.

With VSAN, the performance and availability of VMs are governed by VM storage policies, and a storage policy can be attached to individual VMs during deployment of the virtual machine.

You can select the capabilities when a VM storage policy is created.

For example, if an administrator wants a VM to tolerate two failures using a RAID-1 mirroring configuration, there need to be three copies of the VMDK, meaning the amount of capacity consumed would be 300% of the size of the VMDK. With a RAID-6 implementation, a double parity is implemented, which is also distributed across all the hosts. For RAID-6, there must be a minimum of six hosts in the cluster. RAID-6 also allows a VM to tolerate two failures, but consumes capacity equivalent to only 150% of the size of the VMDK. A worked example follows.
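To make the capacity math concrete, take a 100 GB VMDK with failures to tolerate (FTT) set to 2 (the vSAN RAID-6 scheme is 4 data + 2 parity segments, a 1.5x overhead):

RAID-1 (mirroring):       3 full copies      -> 3.0 x 100 GB = 300 GB of raw capacity
RAID-6 (erasure coding):  4 data + 2 parity  -> 1.5 x 100 GB = 150 GB of raw capacity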

The number of Disk Stripes Per Object:

When the failure tolerance method is set to RAID-5/6 (Erasure Coding) - Capacity, each component of the RAID-5 or RAID-6 stripe may also be configured as a RAID-0 stripe.

The figure shows the storage object configuration when the stripe width is set to 2, failures to tolerate is set to 1, and the failure tolerance method is not set.

Force Provisioning:

If the force provisioning parameter is set to a nonzero value, the object that has this setting in its policy will be provisioned even if the requirements specified in the VM storage policy cannot be satisfied by the vSAN datastore. The VM will be shown as noncompliant in the VM summary tab and in relevant VM storage policy views in the vSphere client. If there is not enough space in the cluster to satisfy the reservation requirements of at least one replica, however, the provisioning will fail even if force provisioning is turned on. When additional resources become available in the cluster, vSAN will bring this object to a compliant state.

Administrators who use this option to force provision virtual machines need to understand that, although the VM may be provisioned with only one replica copy, vSAN may immediately consume resources as they become available to try to satisfy the policy settings of the virtual machine.

NOTE: This parameter should be used only when absolutely needed and as an exception.

VASA Vendor Provider:

This uses the vSphere APIs for Storage Awareness (VASA) to surface the vSAN capabilities up to vCenter Server.

VASA allows storage vendors to publish the capabilities of their storage to vCenter Server, which in turn can display these capabilities in the vSphere Web Client. VASA may also provide information about storage health status, configuration info, capacity and thin provisioning info, and so on. VASA enables VMware to have an end-to-end story regarding storage.

With vSAN, and now VVols, you define the capabilities you want to have for your VM storage in a VM storage policy. This policy information is then pushed down to the storage layer, basically informing it that these are the requirements you have for storage. VASA will then tell you whether the underlying storage (e.g., vSAN) can meet these requirements, effectively communicating compliance information on a per-storage object basis.

Enabling VM Storage Policies

In the initial release of vSAN, VM storage policies could be enabled or disabled via the UI. This option is not available in later releases. However, VM storage policies are automatically enabled on a cluster when vSAN is enabled on the cluster. Although VM storage policies are normally only available with certain vSphere editions, a vSAN license will also provide this feature.

Assigning a VM Storage Policy During VM Provisioning

The assignment of a VM storage policy is done during the VM provisioning. At the point where the vSphere administrator must select a destination datastore, the appropriate policy is selected from the drop-down menu of available VM storage policies. The datastores are then separated into compatible and incompatible datastores, allowing the vSphere administrator to make the appropriate and correct choice for VM placement.

The vSAN datastore is shown as noncompliant when policy cannot be met.
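Once a VM is deployed, compliance can also be inspected per object from RVC. A minimal sketch (the datacenter and VM names are illustrative):

> vsan.vm_object_info /localhost/Test-Datacenter/vms/MyVM

This prints each object of the VM along with its policy settings and current component layout.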

Ruby vSphere Console (RVC): Part 1

The Ruby vSphere Console (RVC) is an interactive command-line console user interface for VMware vSphere and Virtual Center. The Ruby vSphere Console is based on the popular RbVmomi Ruby interface to the vSphere API.

The Ruby vSphere Console comes bundled with the vCenter Server Appliance (VCSA) and the Windows version of vCenter Server. It is free of charge and is fully supported. We recommend deploying a vCenter Server Appliance (minimum version 5.5u1b) to act as a dedicated server for the Ruby vSphere Console and Virtual SAN Observer. This will remove any potential performance or security issues from the primary production vCenter Server. After deploying the vCenter Server Appliance, no additional configuration is required to begin using the Ruby vSphere Console to manage your vSphere infrastructure. To begin using the Ruby vSphere Console, simply ssh to the dedicated vCenter Server Appliance and log in as a privileged user.

Advantages:

++ More detailed Virtual SAN insights than the vSphere Web Client

++ Cluster-wide view of VSAN, while esxcli can only offer a host perspective

++ Mass operations via wildcards

++ Works against an ESXi host directly, even if vCenter Server is down

How do you set it up?

Simply deploy the vCenter Server Appliance, configure network connectivity for the appliance, and SSH to it as a privileged user. No additional configuration is required to begin.

KB: https://kb.vmware.com/s/article/2007619

Accessing and Logging In

Below you will find the steps to log in and begin using the Ruby vSphere Console (RVC):

1. SSH to the VCSA dedicated to RVC and Virtual SAN Observer usage:

login as: root
VMware vCenter Server Appliance
root@x.x.x.x password:

2. Log in to the VCSA as a privileged OS user (e.g. root or a custom privileged user).

3. Log in to RVC using a privileged user from vCenter. Syntax: rvc [options] [username[:password]@]hostname

Eg:

vcsa:~ # rvc root@x.x.x.x
password:
0 /
1 x.x.x.x/
> cd 1
/localhost> ls
0 Test-Datacenter
/localhost> cd 0

I will then proceed by typing ‘cd 0’ and then ‘ls’ to view the contents of my ‘Test-Datacenter’, which provides me with five more options, as seen below.

The vSphere environment is broken up into five areas:

++ Storage: vSphere Storage Profiles

++ Computers: ESXi Hosts

++ Networks: Networks and network components

++ Datastores: Datastores and datastore components

++ VMs: Virtual Machines and virtual machine components

/localhost/Test-Datacenter> ls
0 storage
1 computer [host]/
2 network [network]/
3 datastores [datastore]/
4 vms [vm]/
/localhost/Test-Datacenter>

You can press TAB twice to view the available namespaces.

Once you log in, you can use the help command to check the options (add the ‘-help’ parameter to the end of the command, or type ‘help’ followed by the command). For example:

help vsan
help cluster
help host
help vm

Viewing Virtual SAN Datastore Capacity: “show vsanDatastore”

Here is an example of using “ls” to list the datastores within the infrastructure and then using “show” to obtain high-level information on the “vsanDatastore”. Notice the capacity and free space of the vsanDatastore.

/localhost/Test-Datacenter/datastores> ls
0 datastore1: 99.21GB 0.7%
1 datastore1 (1): 99.14GB 0.1%
2 vsanDatastore: 700.43GB 17.7% 
/localhost/Test-Datacenter/datastores> show vsanDatastore/
path: /localhost/Test-Datacenter/datastores/vsanDatastore
type: vsan url: ds:///vmfs/volumes/vsan:5207cb725036c9fc-3e560cb2fb96f36d/
multipleHostAccess: true
capacity: 700.43GB
free space: 684.14GB

Virtual SAN RVC Commands Overview: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vsan/vmware-ruby-vsphere-console-command-reference-for-virtual-san.pdf

Page 14 onwards.

I am working on the rvc commands as of now, and I will update the details in my next post: Ruby vSphere Console (RVC): Part 2

VMware Site Recovery Manager 6.5

VMware Site Recovery Manager 6.5 is an exciting release for VMware. There are a few cool features that make it easier to use and monitor SRM, as well as a few new integrations with other VMware products. All of these improvements enhance what is already the premier virtualization BC/DR solution and provide additional value, time savings, security, and risk reduction to customers.

vSphere 6.5 Compatibility:

+++++++++++++++++++
First off, SRM 6.5 is compatible with vSphere 6.5 including:

  • Full integration with the new vCenter HA feature. SRM will continue working normally if vCenter HA fails over
  • Full support for Windows vCenter to VCSA migration. If a customer uses the migration tool to upgrade and migrate their environment from vCenter 6.0 on Windows to vCenter 6.5 on the VCSA, from an SRM standpoint this is just seen as an upgrade and is fully compatible with a standard SRM upgrade
  • SRM supports protecting VMs that are using VM encryption when using Storage Policy-Based Protection Groups (SPPGs)
  • There is now support for two-factor authentication, such as RSA SecurID, with SRM 6.5
  • Integration with the new vSphere Guest Operations API. This means that changes to VM IP addresses and scripts running on VMs will now be even more secure

vSAN 6.5 Compatibility:

+++++++++++++++++

SRM 6.5 is fully supported and compatible with vSAN 6.5, as well as all previous vSAN versions, using vSphere Replication.

SRM and VVOLs interoperability:

++++++++++++++++++++++++

In addition to all the new cool features that are part of Virtual Volumes (VVOLs) 2.0, SRM 6.5 now supports protection of VMs located on VVOLs using vSphere Replication. If you are using storage other than vSAN you owe it to yourself to take a look at VVOLs and see what you can get out of using them.

API and vRO plug-in enhancements – Scripted/Unattended install and upgrade:

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++

There have been a number of enhancements to the programmatic interaction with SRM 6.5. These take a few forms, primarily: exposure of a number of new options in the SRM API and significant enhancements to the vRealize Orchestrator (vRO) plug-in for SRM to take advantage of these. The new functions available through the API and vRO plug-in are listed below. Follow these links for details on the existing API and vRO plug-in.

  • Add a Test Network Mapping
  • Get Test Network Mappings
  • Remove a Test Network Mapping
  • Remove Folder Mapping
  • Remove Network Mapping
  • Remove Resource Mapping
  • Remove Protection Group
  • Remove Replicated VM From VR Group
  • Unprotect Virtual Machines
  • Add Test Network Mapping to Recovery Plan
  • Create Recovery Plan
  • Delete Recovery Plan
  • Initiate Planned Migration Recovery Plan
  • Remove Protection Group From Recovery Plan
  • Remove Test Network Mapping From Recovery Plan
  • Discover Replicated Devices

These functions along with the functions previously exposed provide programmatic access to almost the full range of SRM functionality. This makes it that much easier to manage and maintain SRM programmatically, saving time and improving accuracy and efficiency. Additionally, SRM 6.5 now fully supports unattended/silent installation and upgrades. This makes the deployment of SRM much faster and easier, saving you time and money.

vSphere Replication RPO:

+++++++++++++++++++

Another exciting new enhancement for SRM is actually part of a related solution, vSphere Replication. vSphere Replication now supports VM RPOs as low as 5 minutes on most VMware compatible storage. This takes the RPO available with vSphere Replication to the point where it covers most use cases. And remember that vSphere Replication is included in vSphere Essentials Plus licensing and above.

vROps SRM Management Pack:

+++++++++++++++++++++++

Last but definitely not least, SRM 6.5 marks the first time that vROps has a management pack for monitoring SRM. This will allow for monitoring of the SRM server, Protection Group and Recovery Plan status from within vROps. This makes it easier to monitor, manage and troubleshoot SRM.

IOPS LIMIT FOR OBJECT

A number of customers have expressed a wish to limit the amount of I/O that a single VM can generate against a VSAN datastore. The main reason for this request is to prevent a high-I/O VM (or, to be more precise, an IOPS-intensive application inside a VM) from impacting other VMs running on the same datastore. With the introduction of IOPS limits, implemented via policies, administrators can limit the number of IOPS that a VM can perform.

VSAN 6.2 has a new quality of service mechanism referred to as “IOPS limit for object”. Through a policy setting, a customer can set an IOPS limit on a per-object basis (typically a VMDK), which guarantees that the object will not be able to exceed that number of IOPS. This is very useful if you have a virtual machine that might be consuming more than its required share of resources. This policy setting places “guard rails” on the virtual machine so that it does not impact other VMs or the overall performance of the VSAN datastore.

The screenshot below shows what the new “IOPS limit for object” capability looks like in the VM Storage Policy. Simply select “IOPS limit for object” for your policy, and then select an integer value for the IOPS limit. Any object (VMDK) that has this policy assigned will not be able to generate more than that number of IOPS.

Normalized to 32KB:

++++++++++++++++

The IO size for IOPS Limit is normalized to 32KB. Note that this is a hard limit on the number of IOPS so even if you have enough resources available on the system to do more, this will prevent the VM/VMDK from doing so.
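To illustrate the normalization, assume a policy with an IOPS limit of 1,000:

  • A workload issuing I/Os of 32KB or smaller is capped at 1,000 I/Os per second (each I/O counts as one).
  • A workload issuing 64KB I/Os is capped at roughly 500 I/Os per second, because each 64KB I/O counts as two normalized 32KB I/Os.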

Considerations:

++++++++++++

One thing to consider is that not only are read and write I/O counted toward the limit, but any snapshot I/O that occurs against the VM/VMDK is also counted toward the IOPS limit.

If the I/O against a particular VM/VMDK rises above the IOPS limit threshold, the excess I/O is delayed/throttled; for example, if the limit is set to 10,000 IOPS and the 10,001st I/O arrives within the same second, that I/O is delayed.

Object Store File System

OSFS (Object Store File System) enables the VMFS-formatted namespace volumes to be presented and mounted as a single datastore on each host. Data on a VSAN datastore is stored in the form of data containers called objects, which are distributed across the cluster. An object can be a VMDK file, a snapshot, or the VM home folder. For each VM, a namespace object is also created; it holds a VMFS volume and stores the virtual machine metadata files.

Logs for OSFS are captured in /var/log/osfsd.log
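To watch osfsd activity live while trying the commands below, tail the log from the ESXi shell:

# tail -f /var/log/osfsd.log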

How would you make a change in vSAN datastore with osfs-mkdir and osfs-rmdir?

It is not easy to create or remove a directory on VSAN because the vSAN datastore is object based. Special vSAN-related commands exist to perform such tasks.

If you try to create a directory on VSAN, you might get an error such as:

# cd /vmfs/volumes/vsanDatastore

# mkdir TEST

mkdir: can’t create directory ‘TEST’: Function not implemented

How can we create/remove a directory?

Step 1: Log in to ESXi and access the folder …/bin/

# cd /usr/lib/vmware/osfs/bin

Step 2: List the contents 

# ls

objtool     osfs-ls     osfs-mkdir  osfs-rmdir  osfsd

Step 3: Verify that a directory called TEST does not exist

# ls -lh /vmfs/volumes/vsanDatastore/TEST

ls: /vmfs/volumes/vsanDatastore/TEST: No such file or directory

Step 4: Let’s create a directory named TEST using osfs-mkdir

# ./osfs-mkdir /vmfs/volumes/vsanDatastore/TEST

54c0ba65-0c45-xxxx-b1f2-xxxxxxxxxxxx

Step 5: Verify that it exists

# ls -lh /vmfs/volumes/vsanDatastore/TEST

lrwxr-xr-x    1 root     root          12 Jan 09 21:03 /vmfs/volumes/vsanDatastore/TEST ->54c0ba65-0c45-xxxx-b1f2-xxxxxxxxxxxx

Step 6: Let’s try to delete the directory now using osfs-rmdir

# ./osfs-rmdir /vmfs/volumes/vsanDatastore/TEST

Deleting directory 54c0ba65-0c45-xxxx-b1f2-xxxxxxxxxxxx in container id xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

backed by vsan (force=False)

Step 7: Verify that it has been removed

# ls -lh /vmfs/volumes/vsanDatastore/TEST

ls: /vmfs/volumes/vsanDatastore/TEST: No such file or directory
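The bin listing in Step 2 also shows osfs-ls. Assuming its syntax mirrors osfs-mkdir and osfs-rmdir above (an untested sketch), it can be used to list directories on the vSAN datastore:

# ./osfs-ls /vmfs/volumes/vsanDatastore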