What is Resync of Objects in VSAN?

You can monitor the status of virtual machine objects that are being resynchronized in the Virtual SAN cluster.

The following events trigger resynchronization in the cluster:

  • Editing a virtual machine (VM) storage policy. When you change VM storage policy settings, Virtual SAN might initiate object recreation and subsequent resynchronization of the objects.
    Certain policy changes might cause Virtual SAN to create another version of an object and synchronize it with the previous version. When the synchronization is complete, the original object is discarded.
    Virtual SAN ensures that VMs continue to run and are not interrupted by this process. This process might require additional temporary capacity.
  • Restarting a host after a failure.
  • Recovering hosts from a permanent or long-term failure. If a host is unavailable for more than 60 minutes (by default), Virtual SAN creates copies of data to recover the full policy compliance.
  • Evacuating data by using the Full data migration mode before you place a host in maintenance mode.
  • Exceeding the utilization threshold of a capacity device. Resynchronization is triggered when capacity device utilization in the Virtual SAN cluster approaches or exceeds the threshold level of 80 percent.
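
As a rough sketch of that last trigger (the device sizes below are made-up example numbers, not from any real cluster), the 80 percent check amounts to:

```shell
# Hypothetical numbers for one capacity device; 80 percent is the
# default vSAN utilization threshold mentioned above.
used_gb=620
total_gb=700
threshold=80
pct=$(( used_gb * 100 / total_gb ))   # integer percent utilization
if [ "$pct" -ge "$threshold" ]; then
  echo "Device at ${pct}% - vSAN may rebalance/resync components"
else
  echo "Device at ${pct}% - below threshold"
fi
```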

To evaluate the status of objects that are being resynchronized, you can monitor the resynchronization tasks that are currently in progress.

Prerequisites:

Verify that hosts in your Virtual SAN cluster are running ESXi 6.0 or later.

Procedure:

  1. Navigate to the Virtual SAN cluster in the vSphere Web Client.
  2. Select the Monitor tab and click Virtual SAN.
  3. Select Resyncing Components to track the progress of resynchronization of virtual machine objects and the number of bytes that are remaining before the resynchronization is complete. You can also view information about the number of objects that are currently being synchronized in the cluster, the estimated time to finish the resynchronization, the time remaining for the storage objects to fully comply with the assigned storage policy, and so on. If your cluster has connectivity issues, the data on the Resyncing Components page might not get refreshed as expected and the fields might reflect inaccurate information.

Best practices to upgrade VSAN

Before you attempt to upgrade vSAN, verify that your environment meets the vSphere hardware and software requirements.

1: Verify that vSAN 6.6 supports the software and hardware components, drivers, firmware, and storage I/O controllers that you plan on using. Supported items are listed on the VMware Compatibility Guide website at http://www.vmware.com/resources/compatibility/search.php.

2: Verify that you have enough space available to complete the software version upgrade. The amount of disk storage needed for the vCenter Server installation depends on your vCenter Server configuration.

3: Verify that you have enough capacity available to upgrade the disk format. If free space equal to the consumed capacity of the largest disk group is not available on the disk groups other than the ones being converted, you must choose Allow reduced redundancy as the data migration option.
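
As a sketch of this pre-check (the GB figures below are hypothetical), the comparison is simply:

```shell
# Hypothetical capacity figures: free space on disk groups OTHER than
# the one being converted must be at least the consumed capacity of the
# largest disk group for a full evacuation.
largest_dg_consumed_gb=800
free_on_other_dgs_gb=650
if [ "$free_on_other_dgs_gb" -ge "$largest_dg_consumed_gb" ]; then
  echo "Full evacuation possible during the on-disk format upgrade"
else
  echo "Not enough capacity - 'Allow reduced redundancy' would be required"
fi
```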

4: Verify that you have placed the vSAN hosts in maintenance mode and selected the Ensure data accessibility or Evacuate all data option.

5: Verify that you have backed up your virtual machines.

6: All vSAN disks should be healthy

      >> No disk should be failed or absent

      >> This can be determined via the vSAN Disk Management view in the vSphere Web Client.

7: There should be no inaccessible vSAN objects

     >> Use RVC to determine this: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vsan/vmware-ruby-vsphere-console-command-reference-for-virtual-san.pdf

8: There should not be any active resync at the start of the upgrade process.

After starting a vSAN cluster upgrade:

1: Do not attempt to upgrade a cluster by introducing new versions to the cluster and migrating workloads.

2: If you are adding or replacing disks in the midst of an upgrade, ensure that they are formatted with the appropriate legacy on-disk format version: How to format vSAN Disk Groups with a legacy format version (2146221).

Recommendations:

Consider the following recommendations when deploying ESXi hosts for use with vSAN:

1: If ESXi hosts are configured with memory capacity of 512 GB or less, use SATADOM, SD, USB, or hard disk devices as the installation media.
2: If ESXi hosts are configured with memory capacity greater than 512 GB, use a separate magnetic disk or flash device as the installation device. If you are using a separate device, verify that vSAN is not claiming the device.
3: When you boot a vSAN host from a SATADOM device, you must use a single-level cell (SLC) device and the size of the boot device must be at least 16 GB.
4: To ensure your hardware meets the requirements for vSAN, refer to Hardware Requirements for vSAN.

Please check the vSAN upgrade requirements (2145248) : https://kb.vmware.com/s/article/2145248

Remove Disk Partition on VSAN

If you have added a device that contains residual data or partition information, you must remove all preexisting partition information from the device before you can claim it for vSAN use. VMware recommends adding clean devices to disk groups.

When you remove partition information from a device, vSAN deletes the primary partition that includes disk format information and logical partitions from the device.

Prerequisites:

+++++++++++

Verify that the device is not in use by ESXi as boot disk, VMFS datastore, or vSAN.

Procedure:

++++++++

  1. Navigate to the vSAN cluster in the vSphere Web Client.
  2. Click the Configure tab.
  3. Under vSAN, click Disk Management.
  4. Select a host to view the list of available devices.
  5. From the Show drop-down menu at the bottom of the page, select Ineligible.
  6. Select a device from the list, and click the Erase partitions on the selected disks icon
  7. Click OK to confirm.

If vCenter Server is inaccessible from the vSphere Web Client, follow the steps below to manually remove and recreate a vSAN disk group using the ESXi command-line interface (esxcli).

To remove and recreate a disk group using esxcli commands:

Note: These steps can be data destructive if not followed carefully.

  1. Log in to the ESXi host that owns the disk group as the root user using SSH.
  2. Run one of these commands to put the host in Maintenance mode. There are 3 options:

    Note: VMware recommends using the ensureObjectAccessibility option. Failure to use either the ensureObjectAccessibility or evacuateAllData mode may result in data loss.

    • Recommended:
      • Ensure accessibility of data:
        esxcli system maintenanceMode set --enable true -m ensureObjectAccessibility
      • Evacuate data:
        esxcli system maintenanceMode set --enable true -m evacuateAllData
    • Not recommended:
      • Don’t evacuate data:
        esxcli system maintenanceMode set --enable true -m noAction
  3. Record the cache and capacity disk ids in the existing group by running this command:
    esxcli vsan storage list

    Example output of a capacity tier device:
    naa.123456XXXXXXXXXXX:
    Device: naa.123456XXXXXXXXXXX
    Display Name: naa.123456XXXXXXXXXXX
    Is SSD: true
    VSAN UUID: xxxxxxxx-668b-ec68-b632-xxxxxxxxxxxx
    VSAN Disk Group UUID: xxxxxxxx-17c6-6f42-d10b-xxxxxxxxxxxx
    VSAN Disk Group Name: naa.50000XXXXX1245
    Used by this host: true
    In CMMDS: true
    On-disk format version: 5
    Deduplication: true
    Compression: true
    Checksum: 598612359878654763
    Checksum OK: true
    Is Capacity Tier: true
    Encryption: false
    DiskKeyLoaded: false

    Note: For a cache disk:

    • the VSAN UUID and VSAN Disk Group UUID fields will match
    • Output will report: Is Capacity Tier: false
  4. Remove the disk group:

    esxcli vsan storage remove -u uuid

    Note: Removing the cache disk in this way removes the entire disk group. Always double-check the disk group UUID with the command:
    esxcli vsan storage list

  5. If you have replaced physical disks, see the Additional Information section.
  6. Create the disk group, using this command:
    esxcli vsan storage add -s naa.xxxxxx -d naa.xxxxxxx -d naa.xxxxxxxxxx -d naa.xxxxxxxxxxxx

    Where naa.xxxxxx is the NAA ID of the disk device and the disk devices are identified as per these options:

    • -s indicates an SSD.
    • -d indicates a capacity disk.
  7. Run the esxcli vsan storage list command to see the new disk group and verify that all disks are reporting True in CMMDS output.
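
The cache-versus-capacity rule from step 3 can be checked mechanically. The sketch below uses placeholder UUIDs and parses saved `esxcli vsan storage list` output with awk; a disk whose VSAN UUID equals its VSAN Disk Group UUID is the cache device of that group:

```shell
# Placeholder UUIDs; in practice, save real output first with:
#   esxcli vsan storage list > /tmp/vsan_list.txt
cat > /tmp/vsan_list.txt <<'EOF'
   VSAN UUID: aaaa-1111
   VSAN Disk Group UUID: aaaa-1111
   Is Capacity Tier: false
   VSAN UUID: bbbb-2222
   VSAN Disk Group UUID: aaaa-1111
   Is Capacity Tier: true
EOF
# A disk whose VSAN UUID equals its VSAN Disk Group UUID is the cache disk.
awk -F': ' '
  /VSAN UUID:/            { uuid = $2 }
  /VSAN Disk Group UUID:/ { if (uuid == $2) print uuid " is the cache disk of its group" }
' /tmp/vsan_list.txt
```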

Storage Policies on vSAN

Throughout the life cycle of the Virtual Machine, profile-driven storage allows the administrator to check whether its underlying storage is still compatible. The reason why this is useful is that if the VM is migrated to a different datastore for whatever reason, the administrator can ensure that it has moved to a datastore that continues to meet its requirements.

However, this is not how it works in the case of VM storage policies. With vSAN, the storage quality of service no longer resides with the datastore; instead, it resides with the VM and is enforced by the VM storage policy associated with the VM and its disks (VMDKs). Once the policy is pushed down to the storage layer, in this case vSAN, the underlying storage is responsible for creating storage for the VM that meets the requirements placed in the policy.

This makes life a bit easier for administrators.

Storage Policy-Based Management:

Deploying a vSAN datastore is different from the traditional way of mounting a LUN from a storage array or NAS/NFS device.

For better performance and availability of the VMs, vSAN uses VM storage policies; a storage policy can be attached to individual VMs during deployment of the virtual machine.

You can select the capabilities when a VM storage policy is created.

Similarly, if an administrator wants a VM to tolerate two failures using a RAID-1 mirroring configuration, there would need to be three copies of the VMDK, meaning the amount of capacity consumed would be 300% the size of the VMDK. With a RAID-6 implementation, a double parity is implemented, which is also distributed across all the hosts. For RAID-6, there must be a minimum of six hosts in the cluster. RAID-6 also allows a VM to tolerate two failures, but only consumes capacity equivalent to 150% the size of the VMDK.
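
To make those overhead figures concrete, here is the arithmetic for a hypothetical 100 GB VMDK with failures to tolerate set to 2:

```shell
vmdk_gb=100
# RAID-1 with FTT=2 keeps three full mirror copies of the VMDK -> 300%.
raid1_gb=$(( vmdk_gb * 3 ))
# RAID-6 (double parity) also tolerates two failures, at 150% of the VMDK size.
raid6_gb=$(( vmdk_gb * 150 / 100 ))
echo "RAID-1 FTT=2 consumes ${raid1_gb} GB"
echo "RAID-6 FTT=2 consumes ${raid6_gb} GB"
```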

The number of Disk Stripes Per Object:

When failure tolerance method is set to capacity, each component of the RAID-5 or RAID-6 stripe may also be configured as a RAID-0 stripe.

Storage object configuration when stripe width is set to 2, failures to tolerate is set to 1, and replication method optimizes for is not set.

Force Provisioning:

If the force provisioning parameter is set to a nonzero value, the object that has this setting in its policy will be provisioned even if the requirements specified in the VM storage policy cannot be satisfied by the vSAN datastore. The VM will be shown as noncompliant in the VM Summary tab and in relevant VM storage policy views in the vSphere client. If there is not enough space in the cluster to satisfy the reservation requirements of at least one replica, however, the provisioning will fail even if force provisioning is turned on. When additional resources become available in the cluster, vSAN will bring this object to a compliant state.

Administrators who use this option to force provision virtual machines need to understand that although the VM may be provisioned with only one replica copy, once additional resources become available, vSAN may immediately consume them to try to satisfy the policy settings of the virtual machine.

NOTE: This parameter should be used only when absolutely needed and as an exception.

VASA Vendor Provider:

This uses the vSphere APIs for Storage Awareness (VASA) to surface up the vSAN capabilities to the vCenter Server.

VASA allows storage vendors to publish the capabilities of their storage to vCenter Server, which in turn can display these capabilities in the vSphere Web Client. VASA may also provide information about storage health status, configuration info, capacity and thin provisioning info, and so on. VASA enables VMware to have an end-to-end story regarding storage.

With vSAN, and now VVols, you define the capabilities you want to have for your VM storage in a VM storage policy. This policy information is then pushed down to the storage layer, basically informing it that these are the requirements you have for storage. VASA will then tell you whether the underlying storage (e.g., vSAN) can meet these requirements, effectively communicating compliance information on a per-storage object basis.

Enabling VM Storage Policies

In the initial release of vSAN, VM storage policies could be enabled or disabled via the UI. This option is not available in later releases. However, VM storage policies are automatically enabled on a cluster when vSAN is enabled on the cluster. Although VM storage policies are normally only available with certain vSphere editions, a vSAN license will also provide this feature.

Assigning a VM Storage Policy During VM Provisioning

The assignment of a VM storage policy is done during the VM provisioning. At the point where the vSphere administrator must select a destination datastore, the appropriate policy is selected from the drop-down menu of available VM storage policies. The datastores are then separated into compatible and incompatible datastores, allowing the vSphere administrator to make the appropriate and correct choice for VM placement.

The vSAN datastore is shown as noncompliant when policy cannot be met.

Rubrik Cloud Data Management platform

Rubrik is the world’s first Cloud Data Management platform that delivers data protection, search, analytics, and copy data management to hybrid cloud enterprises.

Why Rubrik?

1: Rubrik allows us to provide an effective backup solution for cloud clients, with the ability to dynamically set SLAs on your VMs across both private and public clouds.

2: Can spend less time focusing on building backup infrastructure and more time designing and maintaining solutions

3: The ease of deployment and amazing user interface for customers is a key differentiator in the market.

4: Zero-time restores, granular recovery to the file level, and Google-like predictive search!

     The best part of Rubrik is that you reduce your RPO. The recovery of a Microsoft Exchange database took only minutes with Rubrik. Before, it would have taken 12 hours to restore, and then we would have had to mount the database with power tools that we purchased ourselves. The Google-like Rubrik file restores have been a life saver many times.

5: Native Immutability to Fight Ransomware: Recover from ransomware with no data loss with immutable backups built into the platform. Resume business within minutes of an attack. Instantly search and recover files to any point-in-time, on-premises or in the cloud. No ransom, no downtime.

How does Rubrik make backups interesting and simple?

Add vCenter:

1. Log in to the web UI.
2. Click the gear icon at the top right of the web UI. The Settings menu appears.
3. From the Settings menu, select vCenter Servers. The Virtual Infrastructure page appears.
4. Click the blue + icon. The Add vCenter dialog box appears.
5. In vCenter IP, type the resolvable hostname or IPv4 address of the vCenter Server.
6. In vCenter Username, type the username assigned to the Rubrik cluster.
7. In vCenter Password, type the password assigned to the Rubrik cluster.
8. Click Add.

So, once you add the vCenter, you can check the VM list under the Virtual Machines tab.

Now we have to create an SLA and add it to the VMs to initiate backups.

Let us try to create an SLA now:

1. On the left pane of the Rubrik web UI, select SLA Domains > Local Domains.
2. Click the blue + icon. The Create New SLA Domain dialog box appears.
3. In SLA Domain Name, type a name for the new SLA Domain. Also, if you want to add archival and replication settings, click the “Configure Remote Settings” button on the same tab after you define the SLA.
4. You can see a new tab, “Remote Storage Configuration”:

>> You can use your archival location and a retention policy as you wish to 🙂

>> Also the replication target.

>> Change the retention of archival or replication by adjusting the scroll bar.

5. We are ready to go now. Hit “Create SLA”.

Now let us see how to add an SLA to a VM:

1: Click on the VM and you can see a page like this:

You have options:

2: Click on “Manage Protection”

3: Click on “Submit”

4: I took an on-demand snapshot. It is the first backup, so you might see a “warning” at the bottom of the same page:

A few moments later, you can see the progress:

And it completes in no time (the first full backup might take some time depending on the size of the VMDK).

So, it’s simple and very easy to perform backups now.

If you are not a big fan of complexity, then Rubrik is the best product to work with 🙂

Ruby vSphere Console (RVC): Part 1

The Ruby vSphere Console (RVC) is an interactive command-line console user interface for VMware vSphere and Virtual Center. The Ruby vSphere Console is based on the popular RbVmomi Ruby interface to the vSphere API.

The Ruby vSphere Console comes bundled with the vCenter Server Appliance (VCSA) and the Windows version of vCenter Server. It is free of charge and is fully supported. We recommend deploying a vCenter Server Appliance (minimum version 5.5u1b) to act as a dedicated server for the Ruby vSphere Console and Virtual SAN Observer. This will remove any potential performance or security issues from the primary production vCenter Server. After deploying the vCenter Server Appliance, no additional configuration is required to begin using the Ruby vSphere Console to manage your vSphere infrastructure. To begin using the Ruby vSphere Console, simply ssh to the dedicated vCenter Server Appliance and log in as a privileged user.

Advantages :

++ More detailed Virtual SAN insights than the vSphere Web Client

++ Cluster-wide view of VSAN, while esxcli can only offer a host perspective

++ Mass operations via wildcards

++ Works against an ESXi host directly, even if vCenter is down

How do you set it up?

To begin using the Ruby vSphere Console to manage your vSphere infrastructure, simply deploy the vCenter Server Appliance and configure network connectivity for the appliance. SSH to the dedicated vCenter Server Appliance and log in as a privileged user. No additional configuration is required to begin.

KB: https://kb.vmware.com/s/article/2007619

Accessing and Logging In

Below you will find the steps to log in and begin using the Ruby vSphere Console (RVC):

1. SSH to the VCSA dedicated to RVC and Virtual SAN Observer usage:

login as: root
VMware vCenter Server Appliance
root@x.x.x.x password:

2. Login to the VCSA as a privileged OS user (e.g. root or custom privileged user).

3. Login to RVC using a privileged user from vCenter. Syntax: rvc [options] [username[:password]@]hostname

Eg:

vcsa:~ # rvc root@x.x.x.x
password:
0 /
1 x.x.x.x/
> cd 1
/localhost> ls
0 Test-Datacenter
/localhost> cd 0

I will then proceed by typing ‘cd 0’ and then ‘ls’ to view the contents of my ‘Test-Datacenter’ which will then provide me with 5 more options as seen below.

The vSphere environment is broken up into 5 areas:

++ Storage: vSphere Storage Profiles

++ Computers: ESXi Hosts

++ Networks: Networks and network components

++ Datastores: Datastores and datastore components

++ VMs: Virtual Machines and virtual machine components

/localhost/Test-Datacenter>ls
0 storage
1 computer [host]/
2 network [network]/
3 datastores [datastore]/
4 vms [vm]/
/localhost/Test-Datacenter>

You can use TAB twice to view the namespace

Once you log in, you can use the help command to check the options (add the ‘-help’ parameter to the end of the command, or type ‘help’ followed by the command). For example:

help vsan
help cluster
help host
help vm

Viewing Virtual SAN Datastore Capacity: “show vsanDatastore”

Here is an example of using “ls” to list out datastores within the infrastructure and then using “show” to obtain high-level information on the “vsanDatastore”. Notice the capacity and free space of the vsanDatastore.

/localhost/Test-Datacenter/datastores> ls
0 datastore1: 99.21GB 0.7%
1 datastore1 (1): 99.14GB 0.1%
2 vsanDatastore: 700.43GB 17.7% 
/localhost/Test-Datacenter/datastores> show vsanDatastore/
path: /localhost/Test-Datacenter/datastores/vsanDatastore
type: vsan
url: ds:///vmfs/volumes/vsan:5207cb725036c9fc-3e560cb2fb96f36d/
multipleHostAccess: true
capacity: 700.43GB
free space: 684.14GB

Virtual SAN RVC Commands Overview: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vsan/vmware-ruby-vsphere-console-command-reference-for-virtual-san.pdf

Page 14 onwards.

I am working on the rvc commands as of now, and I will update the details in my next post: Ruby vSphere Console (RVC): Part 2

UCS LSI Adapter Beeps Continuously for the UCS C-Series Rack servers

A Cisco UCS server beeps continuously; this beep originates from the LSI RAID adapter.

You can view the LSI MegaRAID Card Beep Codes in order to identify the specific alarm.

If there are no failed drives reported by the Cisco Integrated Management Controller (CIMC), this problem might occur due to an issue described in this LSI MegaRaid firmware bug: LSIP200139764.

Note: The LSI defect is reported as fixed in MegaRaid firmware 21.0.1-0111 and 21.0.1-0110. This issue affects any server vendor that uses these adapters (for example, Cisco, HP, and Dell).

You can silence the alarm in the LSI MegaRAID BIOS Config Utility. However, be aware that this process requires a server reboot in order to access the WebBIOS. The alarm control is located in Controller Properties.

Also, if you want to avoid an LSI firmware upgrade, you can complete these steps in order to resolve this issue:

  1. Install MegaCLI. Refer to the LSI Documents & Downloads page in order to locate documentation on this procedure.
  2. Run this command in MegaCLI in order to silence the alarm:
    # MegaCli -AdpSetProp -AlarmSilence -aALL

LSI MegaRAID Card Beep Codes:

These beep codes indicate activity and changes from the optimal state of your RAID array. For full documentation on the LSI MegaRAID cards and the LSI utilities, refer to the LSI documentation for your card.

I hope this helps.

Troubleshoot Memory issues in Cisco UCS Box

Types of errors that we find in memory:

    • DIMM Error
      • ECC(Error Correcting Code) Error
        • Multibit = Uncorrectable
          • At POST, it is mapped out by the BIOS; the OS does not see the DIMM
          • At runtime, it usually causes an OS reboot
        • Singlebit = Correctable
          • OS continues to see memory, performance could degrade
      • Parity Error
      • SPD (Serial Presence Detect) Error
    • Configuration Error
      • Unpaired DIMMs
      • Mismatch errors
        • Not supported DIMMs
        • Not supported DIMM population
    • Identity unestablishable error
      • Check and update the catalog

We need to understand what correctable and uncorrectable errors are in order to troubleshoot any memory-related issues on a UCS box.

Whether a particular error is correctable or uncorrectable depends on the strength of the ECC code employed within the memory system. Dedicated hardware is able to fix correctable errors when they occur with no impact on program execution.

The DIMMs with correctable errors are not disabled and are available for the OS to use. The Total Memory and Effective Memory remain the same (taking memory mirroring into account). These correctable errors are reported in UCSM with an operability state of Degraded, while the overall operability remains Operable with correctable errors.

Uncorrectable errors generally cannot be fixed and may make it impossible for the application or operating system to continue execution. The DIMMs with uncorrectable errors are disabled and the OS does not see that memory. The UCSM operState changes to “Inoperable” in this case.
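
As a hypothetical illustration of the effect on reported memory (the DIMM count and size below are made up): a DIMM mapped out because of an uncorrectable error reduces Effective Memory, while correctable errors leave it untouched:

```shell
dimm_gb=16
total_dimms=8
uncorrectable_dimms=1   # mapped out by the BIOS, invisible to the OS
total_mem=$(( dimm_gb * total_dimms ))
effective_mem=$(( dimm_gb * (total_dimms - uncorrectable_dimms) ))
echo "Total Memory: ${total_mem} GB"
echo "Effective Memory: ${effective_mem} GB"
```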

To Check Errors from CLI

These commands are useful when troubleshooting errors from CLI.

scope server x/y -> show memory detail
scope server x/y -> show memory-array detail
scope server x/y -> scope memory-array x -> show stats history memory-array-env-stats detail

From memory array scope you can also get access to DIMM.

scope server X/Y > scope memory-array Z > scope DIMM N

From there then you can obtain per-DIMM statistics or reset the error counters.

bdsol-6248-06-B /chassis/server/memory-array/dimm # reset-errors                
bdsol-6248-06-B /chassis/server/memory-array/dimm* # commit-buffer               
bdsol-6248-06-B /chassis/server/memory-array/dimm # show stats memory-error-state

If you see a correctable error reported that matches the information above, the problem can be corrected by resetting the BMC instead of reseating or resetting the blade server. Use these Cisco UCS Manager CLI commands:

Resetting the BMC does not impact the OS running on the blade.

UCS1-A# scope server x/y
UCS1-A /chassis/server # scope bmc
UCS1-A /chassis/server/bmc # reset
UCS1-A /chassis/server/bmc* # commit-buffer

With UCSM releases 3.1 and 2.2.7, the thresholds for memory corrected errors have been removed.

Therefore, memory modules (DIMM) shall no longer be reported as “Inoperable” or “Degraded” solely due to corrected memory errors.

As per whitepaper http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-manager/whitepaper-c11-736116.pdf

Industry demands for greater capacity, greater bandwidth, and lower operating voltages lead to increased memory error rates. Traditionally, the industry has treated correctable errors in the same way as uncorrectable errors, requiring the module to be replaced immediately upon alert. Given extensive research that correctable errors are not correlated with uncorrectable errors, and that correctable errors do not degrade system performance, the Cisco UCS team recommends against immediate replacement of modules with correctable errors. Customers who experience a Degraded memory alert for correctable errors should reset the memory error and resume operation. Following this recommendation avoids unnecessary server disruption. Future enhancements to error management will help distinguish among various types of correctable errors and identify the appropriate actions, if any, needed.

It is recommended to run a minimum of version 2.1(3c) or 2.2(1b), which include enhancements to UCS memory error management.

Methods to Clear DIMM Blacklisting Errors:

UCSM GUI

UCSM CLI

UCS-B/chassis/server # reset-all-memory-errors

If the above troubleshooting did not help, please feel free to raise a support request for assistance.

6.5 FT – What’s New?

vSphere 6.5 Fault Tolerance introduces a number of new improvements, especially around latency handling.

Benefits of VMware Fault Tolerance

  • Continuous availability with zero downtime and zero data loss
  • No TCP connection loss during failover
  • Fault Tolerance is completely transparent to the Guest OS
  • FT doesn’t depend on the Guest OS or application
  • Instantaneous failover from the Primary VM to the Secondary VM in case of ESXi host failure

DRS Integration improved with Fault Tolerance:

+++++++++++++++++++++++++++++++++++++++++++++

vSphere 6.5 Fault Tolerance (FT) has tighter integration with DRS, which helps make better placement decisions by ranking the ESXi hosts based on available network bandwidth; it also recommends the datastore on which to place the Secondary VM's VMDK files.

We already talked about Network Aware DRS in the vSphere 6.5 HA & DRS article. With Network Aware DRS, DRS now considers the network bandwidth on the ESXi hosts before placing new virtual machines on a host, and also during VM migrations to a host. DRS checks the Tx and Rx rates of the connected physical uplinks and avoids placing VMs on hosts that are more than 80% utilized. It uses the network utilization information as an additional check to determine whether the destination ESXi host is suitable for VM placement and migration. This improves DRS placement decisions.

Lower Network Latency:

+++++++++++++++++++++++

More effort has been put into vSphere 6.5 Fault Tolerance to lower network latency. This improves performance for certain types of applications that were sensitive to the additional latency first introduced with vSphere 6.0 FT. This improvement gives more opportunity to protect mission-critical applications using Fault Tolerance.

Multi-NIC Fault Tolerance Support:

+++++++++++++++++++++++++++++++++

You can now configure FT networks to use multiple network adapters (NICs) to increase the overall bandwidth for FT logging traffic. This works similarly to Multi-NIC vMotion, providing more bandwidth for the FT network.

And the difference between 5.5 and 6.0: