NEW FEATURES IN VSAN 6.6.1

Integration with VMware Update Manager (VUM)

vSAN 6.6.1 is fully integrated with VUM, providing a streamlined update process that keeps your vSAN cluster up to date.

vSAN Performance Diagnostics:

This feature offers advice such as increasing the stripe width, adding more VMs, or bringing more disk groups into use to improve performance. It requires the Customer Experience Improvement Program (CEIP) and the vSAN Performance Service to be enabled. Once these have been configured and diagnostic information has been gathered for approximately one hour, benchmark results can be examined to see whether the maximum number of IOPS, maximum throughput, or minimum latency has been achieved. If not, guidance is provided on how best to achieve these objectives in a benchmark run.
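
Both prerequisites are enabled from the vSphere Web Client. From the Ruby vSphere Console (RVC, covered in the glossary later in this document), you can verify that the performance service's statistics object exists. A hedged sketch, with the datacenter and cluster names as placeholders you must substitute:

vsan.perf.stats_object_info /localhost/<datacenter>/computers/<cluster>  >>> Show the vSAN performance service stats object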



Improved vCenter and unicast behaviour for cluster membership:

If vCenter Server is down and changes are then made to the cluster, vCenter may reset those cluster changes (membership UUIDs) when it recovers. vSAN 6.6.1 introduces a new property called "configuration generation" that tracks vSAN membership changes and avoids this issue.
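
As a quick host-side check, the following command shows the host's view of vSAN cluster membership; on vSAN 6.6 and later builds the output should also include a Config Generation field reflecting this new property (field availability may vary by build):

esxcli vsan cluster get  >>> Host's view of cluster membership and configuration generation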

New health checks for Update Manager (VUM)
 

The vSAN health check has been updated to include checks for new features, such as the VUM integration.
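
On vSAN 6.6 and later hosts, the health checks can also be queried from the ESXi shell. A hedged sketch (the esxcli vsan health namespace is not present on older builds):

esxcli vsan health cluster list  >>> List the vSAN health checks and their current status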

Change the EVC Mode for a Cluster

Prerequisites:

Verify that all hosts in the cluster have supported CPUs for the EVC mode you want to enable. See http://kb.vmware.com/kb/1003212 for a list of supported CPUs.
Verify that all hosts in the cluster are connected and registered on vCenter Server. The cluster cannot contain a disconnected host.
Virtual machines must be in the following power states, depending on whether you raise or lower the EVC mode.

Raise the EVC mode to a CPU baseline with more features: Running virtual machines can remain powered on. New EVC mode features are not available to the virtual machines until they are powered off and powered back on again; a full power cycle is required. Rebooting the guest operating system or suspending and resuming the virtual machine is not sufficient.

Lower the EVC mode to a CPU baseline with fewer features: Power off virtual machines if they are powered on and running at a higher EVC mode than the one you intend to enable.
Procedure:

1. Select a cluster in the inventory.
2. Click the Manage tab and click Settings.
3. Select VMware EVC and click Edit.
4. Select whether to enable or disable EVC:

   Disable EVC: The EVC feature is disabled. CPU compatibility is not enforced for the hosts in this cluster.
   Enable EVC for AMD Hosts: The EVC feature is enabled for AMD hosts.
   Enable EVC for Intel Hosts: The EVC feature is enabled for Intel hosts.

5. From the VMware EVC Mode drop-down menu, select the baseline CPU feature set that you want to enable for the cluster. If you cannot select the EVC mode, the Compatibility pane displays the reason, along with the relevant hosts for each reason.
6. Click OK.

About this task:

Several EVC approaches are available to ensure CPU compatibility:

1. If all the hosts in a cluster are compatible with a newer EVC mode, you can change the EVC mode of an existing EVC cluster.
2. You can enable EVC for a cluster that does not have EVC enabled.
3. You can raise the EVC mode to expose more CPU features.
4. You can lower the EVC mode to hide CPU features and increase compatibility.

Limits on Simultaneous Migrations

vCenter Server places limits on the number of simultaneous virtual machine migration and provisioning operations that can occur on each host, network, and datastore. Each operation, such as a migration with vMotion or cloning a virtual machine, is assigned a resource cost. Each host, datastore, or network resource has a maximum cost that it can support at any one time. Any new migration or provisioning operation that would cause a resource to exceed its maximum cost does not proceed immediately, but is queued until other operations complete and release resources. Each of the network, datastore, and host limits must be satisfied for the operation to proceed.

Network Limits:

Network limits apply only to migrations with vMotion. Network limits depend on the version of ESXi and the network type. All migrations with vMotion have a network resource cost of 1, so a 1GigE vMotion network can support up to 4 concurrent vMotions and a 10GigE network up to 8.

Network Limits for Migration with vMotion

Operation   ESXi Version          Network Type   Maximum Cost
vMotion     5.0, 5.1, 5.5, 6.0    1GigE          4
vMotion     5.0, 5.1, 5.5, 6.0    10GigE         8

Datastore Limits:

Datastore limits apply to migrations with vMotion and with Storage vMotion. A migration with vMotion has a resource cost of 1 against the shared virtual machine's datastore. A migration with Storage vMotion incurs a cost against both the source and the destination datastore. Because the maximum cost per datastore is 128, a single datastore can sustain up to 8 concurrent Storage vMotion operations (8 × 16 = 128).

Datastore Limits and Resource Costs for vMotion and Storage vMotion

Operation         ESXi Version          Maximum Cost Per Datastore   Datastore Resource Cost
vMotion           5.0, 5.1, 5.5, 6.0    128                          1
Storage vMotion   5.0, 5.1, 5.5, 6.0    128                          16

Host Limits:

Host limits apply to migrations with vMotion, Storage vMotion, and other provisioning operations such as cloning, deployment, and cold migration. All hosts have a maximum cost per host of 8. For example, on an ESXi 5.0 host you can perform 2 Storage vMotion operations (2 × 4 = 8), or 1 Storage vMotion and 4 vMotion operations (4 + 4 × 1 = 8).

Host Migration Limits and Resource Costs for vMotion, Storage vMotion, and Provisioning Operations

Operation                        ESXi Version          Derived Limit Per Host   Host Resource Cost
vMotion                          5.0, 5.1, 5.5, 6.0    8                        1
Storage vMotion                  5.0, 5.1, 5.5, 6.0    2                        4
vMotion Without Shared Storage   5.1, 5.5, 6.0         2                        4
Other provisioning operations    5.0, 5.1, 5.5, 6.0    8                        1

 

Understanding VVols

Virtual Volumes (VVols) is a new integration and management framework that virtualizes SAN/NAS arrays, enabling a more efficient operational model that is optimized for virtualized environments and centered on the application instead of the infrastructure. Virtual Volumes simplifies operations through policy-driven automation that enables more agile storage consumption for virtual machines and dynamic adjustments in real time, when they are needed. It simplifies the delivery of storage service levels to individual applications by providing finer control of hardware resources and native array-based data services that can be instantiated with virtual machine granularity.

With Virtual Volumes (VVols), VMware offers a new paradigm in which an individual virtual machine and its disks, rather than a LUN, become the unit of storage management for a storage system. Virtual volumes encapsulate virtual disks and other virtual machine files, and natively store the files on the storage system.

Overview:
++++++++++++++

Virtual Volumes (VVols) are VMDK-granular storage entities exported by storage arrays. Virtual volumes are exported to the ESXi host through a small set of protocol endpoints (PE). Protocol endpoints are part of the physical storage fabric, and they establish a data path from virtual machines to their respective virtual volumes on demand. Storage systems enable data services on virtual volumes; the results of these data services are new virtual volumes. Data services, configuration, and management of virtual volume systems are done exclusively out-of-band with respect to the data path. For management purposes, virtual volumes are grouped logically into storage containers (SC).

Virtual volumes (VVols) and Storage Containers (SC) form the virtual storage fabric. Protocol Endpoints (PE) are part of the physical storage fabric.

By using a special set of APIs called vSphere APIs for Storage Awareness (VASA), the storage system becomes aware of the virtual volumes and their associations with the relevant virtual machines. Through VASA, vSphere and the underlying storage system establish two-way out-of-band communication to perform data services and offload certain virtual machine operations to the storage system. For example, operations such as snapshots and clones can be offloaded.

For in-band communication with Virtual Volumes storage systems, vSphere continues to use standard SCSI and NFS protocols. As a result, Virtual Volumes supports any of these storage types: iSCSI, Fibre Channel, Fibre Channel over Ethernet (FCoE), and NFS.

 

    • Virtual Volumes represent the virtual disks of a virtual machine as abstract objects, identified by a 128-bit GUID and managed entirely by the storage hardware.
    • The model changes from managing space inside datastores to managing abstract storage objects handled by storage arrays.
    • The storage hardware gains complete control over virtual disk content, layout, and management.


Important things to note:

+++++++++++++++++++++++

  • Storage provider (SP): The storage provider acts as the interface between the hypervisor and the external array. It is implemented out-of-band (it is not in the data path) and uses the existing VASA (vSphere APIs for Storage Awareness) protocol. The storage provider also supplies information such as details on VVols and storage containers. VVols requires VASA 2.0, released with vSphere 6.

  • Storage container (SC): This is configured on the external storage appliance. The specific implementation of the storage container varies between storage vendors, although most vendors allow physical storage to be aggregated into pools from which logical volumes can be created.

  • Protocol endpoint (PE): The protocol endpoint acts as an intermediary between VVols and the hypervisor. On block-based systems it is implemented as a traditional LUN, although it stores no actual data (a "dummy" LUN); one example is the gatekeeper LUN in an EMC VMAX array. The protocol endpoint has also been described as an I/O demultiplexer, because it is a pass-through mechanism that allows access to the VVols bound to it. ESXi hosts have no direct access to virtual volumes on the storage side; instead, they use this logical I/O proxy to communicate with virtual volumes and the virtual disk files that virtual volumes encapsulate. (See the command sketch after the list of VVol object types below.)
  • VVols Objects:

+++++++++++++

A virtual datastore represents a storage container in vCenter Server and the vSphere Web Client. Virtual volumes are encapsulations of virtual machine files, virtual disks, and their derivatives.

      • Virtual machine objects are stored natively in the array's storage containers.
      • There are five types of recognized Virtual Volumes:
  • Config-VVol – Metadata
  • Data-VVol – VMDKs
  • Mem-VVol – Memory snapshots
  • Swap-VVol – Swap files
  • Other-VVol – Vendor solution specific
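
On an ESXi 6.x host, each of these building blocks can be inspected from the shell. A brief sketch using the esxcli storage vvol namespace (output is empty until a VASA provider is registered and a storage container is mounted):

esxcli storage vvol vasaprovider list  >>> Registered VASA storage providers
esxcli storage vvol protocolendpoint list  >>> Protocol endpoints visible to the host
esxcli storage vvol storagecontainer list  >>> Storage containers mounted on the host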

Follow these guidelines when using Virtual Volumes:

  • Because the Virtual Volumes environment requires the vCenter Server, you cannot use Virtual Volumes with a standalone ESXi host.
  • Virtual Volumes does not support Raw Device Mappings (RDMs).
  • A Virtual Volumes storage container cannot span across different physical arrays.
  • Host profiles that contain virtual datastores are vCenter Server specific. After you extract this type of host profile, you can attach it only to hosts and clusters managed by the same vCenter Server as the reference host.

Key benefits of Virtual Volumes:

++++++++++++++++++++++++++

  • Operational transformation with Virtual Volumes when data services are enabled at the application level
  • Improved storage utilization with granular level provisioning
  • Common management using Policy Based Management

 

Basic Commands for VSAN

vSAN is one of the best products available from VMware, and a core building block for the Software-Defined Data Center.

Let us understand the different terminology used in VSAN:

CMMDS – Cluster Monitoring, Membership, and Directory Service
CLOMD – Cluster Level Object Manager Daemon
OSFSD – Object Storage File System Daemon
CLOM – Cluster Level Object Manager
OSFS – Object Storage File System
UUID – Universally unique identifier
VSANVP – Virtual SAN Vendor Provider
SPBM – Storage Policy-Based Management
VSA – Virtual Storage Appliance
MD – Magnetic disk
SSD – Solid-State Drive
RVC – Ruby vSphere Console
RDT – Reliable Datagram Transport

Let us get into the details of each of them:

1: CMMDS

In the ESXi Shell, there is a vSAN utility called cmmds-tool, which stands for Cluster Monitoring, Membership, and Directory Service. This tool allows you to perform a variety of operations and queries against the vSAN nodes and their associated objects.

A few examples of cmmds-tool commands:

cmmds-tool find -u <uuid> -f json | less  >>> Find an entry by UUID and view it as JSON

cmmds-tool find -t HOSTNAME  >>> List entries of type HOSTNAME (the cluster hosts)

cmmds-tool find -t DISK | grep "DISK" | wc -l  >>> Count the disk entries in the cluster

cmmds-tool amimember  >>> Report whether this host is a member of the vSAN cluster

cmmds-tool whoami  >>> Show this host's node UUID

cmmds-tool find -t DISK | grep "DISK"  >>> List the disk UUIDs

cmmds-tool find | grep name  >>> List the named entries in the directory

2: CLOMD

Below are a few commands that help you determine the basic configuration of vSAN from the ESXi host.

1: Tag a LUN as SSD

esxcli vsan storage list | grep -i Device | wc -l  >>> Count the devices claimed by vSAN
df -h  >>> Show mounted filesystems and their usage
esxcli vsan storage list | grep -i Device  >>> List the devices claimed by vSAN
esxcfg-scsidevs -a  >>> List the host's storage adapters
esxcli storage nmp device list  >>> List devices claimed by the NMP multipathing plug-in
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2013188
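
The commands above only inspect the current device configuration; the actual tagging, per the KB article above, is done with a SATP claim rule. A minimal sketch, where naa.xxxxxxxx is a placeholder for your own device identifier:

esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.xxxxxxxx --option "enable_ssd"  >>> Add a claim rule marking the device as SSD
esxcli storage core claiming reclaim -d naa.xxxxxxxx  >>> Reclaim the device so the rule takes effect
esxcli storage core device list -d naa.xxxxxxxx | grep -i ssd  >>> Verify the device now reports as SSD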

2: esxcli commands for vSAN:

esxcli vsan datastore name get  >>> vSAN datastore name

esxcli vsan network list  >>> Network configuration of vSAN

esxcli vsan cluster get  >>> Cluster information of vSAN

esxcli vsan policy getdefault  >>> vSAN default storage policy
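
On vSAN 6.6 and later, the esxcli vsan debug namespace provides deeper inspection from the host. One hedged example (availability and output vary by build):

esxcli vsan debug disk summary get  >>> Health summary of the vSAN physical disks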