Storage connectivity to vSphere

The VMware vSphere storage architecture consists of layers of abstraction that hide the differences and manage the complexity among physical storage subsystems.

To the applications and guest operating systems inside each virtual machine, the storage subsystem appears as a virtual SCSI controller connected to one or more virtual SCSI disks. These are the only types of SCSI controllers that a virtual machine can see and access, and they include BusLogic Parallel, LSI Logic Parallel, LSI Logic SAS, and VMware Paravirtual.

The virtual SCSI disks are provisioned from datastore elements in the datacenter. A datastore is like a storage appliance that delivers storage space for virtual machines across multiple physical hosts. Multiple datastores can be aggregated into a single logical, load-balanced pool called a datastore cluster.

The datastore abstraction is a model that assigns storage space to virtual machines while insulating the guest from the complexity of the underlying physical storage technology. The guest virtual machine is not exposed to the underlying storage type, whether it is Fibre Channel SAN, iSCSI SAN, direct-attached storage, or NAS.

Each datastore is physically a VMFS volume on a storage device; NAS datastores are NFS volumes with VMFS-like characteristics. Datastores can span multiple physical storage subsystems. A single VMFS volume can contain one or more LUNs from a local SCSI disk array on a physical host, a Fibre Channel SAN disk farm, or an iSCSI SAN disk farm. New LUNs added to any of the physical storage subsystems are detected and made available to all existing or new datastores. Storage capacity on a previously created datastore can be extended without powering down physical hosts or storage subsystems. If any of the LUNs within a VMFS volume fails or becomes unavailable, only virtual machines that use that LUN are affected; an exception is the LUN that holds the first extent of the spanned volume. All other virtual machines with virtual disks residing on other LUNs continue to function as normal.
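
For a quick look at how datastores map to their underlying devices and extents, the ESXi command line can be used. The following is a minimal sketch; the datastore name datastore1 is a placeholder:

# List all mounted file systems (VMFS and NFS) with their UUIDs and capacity
esxcli storage filesystem list

# Show each VMFS datastore together with the device partitions (extents) backing it
esxcli storage vmfs extent list

# Query the attributes of a specific VMFS volume, including its extents
vmkfstools -P -v 10 /vmfs/volumes/datastore1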

Each virtual machine is stored as a set of files in a directory in the datastore. The disk storage associated with each virtual guest is a set of files within the guest’s directory. You can operate on the guest disk storage as you would on ordinary files: it can be copied, moved, or backed up. New virtual disks can be added to a virtual machine without powering it down. In that case, a virtual disk file (.vmdk) is created in VMFS to provide new storage for the added virtual disk, or an existing virtual disk file is associated with the virtual machine.

VMFS is a clustered file system that leverages shared storage to allow multiple physical hosts to read and write to the same storage simultaneously. VMFS provides on-disk locking to ensure that the same virtual machine is not powered on by multiple servers at the same time. If a physical host fails, the on-disk lock for each virtual machine is released so that virtual machines can be restarted on other physical hosts.

VMFS also features failure consistency and recovery mechanisms, such as distributed journaling, a failure-consistent virtual machine I/O path, and virtual machine state snapshots. These mechanisms can aid quick identification of the cause and recovery from virtual machine, physical host, and storage subsystem failures.

VMFS also supports raw device mapping (RDM). RDM provides a mechanism for a virtual machine to have direct access to a LUN on the physical storage subsystem (Fibre Channel or iSCSI only). RDM supports two typical types of applications:

SAN snapshot or other layered applications that run in the virtual machines. RDM better enables scalable backup offloading systems using features inherent to the SAN.

Microsoft Clustering Services (MSCS) spanning physical hosts and using virtual-to-virtual clusters as well as physical-to-virtual clusters. Cluster data and quorum disks must be configured as RDMs rather than files on a shared VMFS.
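
As a rough sketch of how an RDM mapping file is typically created from the ESXi command line (the LUN identifier and paths below are placeholders; adjust them to your environment):

# Create a physical compatibility (pass-through) RDM mapping file for a LUN
vmkfstools -z /vmfs/devices/disks/naa.600508e012345678 /vmfs/volumes/datastore1/vm01/vm01_rdm.vmdk

# Or create a virtual compatibility RDM, which allows snapshots of the mapping file
vmkfstools -r /vmfs/devices/disks/naa.600508e012345678 /vmfs/volumes/datastore1/vm01/vm01_rdm.vmdk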

 

Supported Storage Adapters:

+++++++++++++++++++++++++

Storage adapters provide connectivity for your ESXi host to a specific storage unit or network.

ESXi supports different classes of adapters, including SCSI, iSCSI, RAID, Fibre Channel, Fibre Channel over Ethernet (FCoE), and Ethernet. ESXi accesses the adapters directly through device drivers in the VMkernel.

View Storage Adapters Information:

++++++++++++++++++++++++++++++

Use the vSphere Client to display storage adapters that your host uses and to review their information.

Procedure

1: In Inventory, select Hosts and Clusters.

2: Select a host and click the Configuration tab.

3: In Hardware, select Storage Adapters.

4: To view details for a specific adapter, select the adapter from the Storage Adapters list.

5: To list all storage devices the adapter can access, click Devices.

6: To list all paths the adapter uses, click Paths.
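
The same information is also available from the ESXi command line. The following ESXCLI calls are a rough equivalent of the procedure above; the device identifier in the last command is a placeholder:

# List all storage adapters (vmhba numbers, drivers, link state)
esxcli storage core adapter list

# List all storage devices visible to the host
esxcli storage core device list

# List all paths, or restrict the output to a single device
esxcli storage core path list
esxcli storage core path list --device naa.600508e012345678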

Types of Physical Storage:

++++++++++++++++++++++

The ESXi storage management process starts with storage space that your storage administrator preallocates on different storage systems.

ESXi supports the following types of storage:

Local Storage: Stores virtual machine files on internal or directly connected external storage disks.

Networked Storage: Stores virtual machine files on external storage disks or arrays attached to your host through a direct connection or through a high-speed network.

Local Storage:

Local storage can be internal hard disks located inside your ESXi host, or it can be external storage systems located outside and connected to the host directly through protocols such as SAS or SATA.

Local storage does not require a storage network to communicate with your host. You need a cable connected to the storage unit and, when required, a compatible HBA in your host.

ESXi supports a variety of internal or external local storage devices, including SCSI, IDE, SATA, USB, and SAS storage systems. Regardless of the type of storage you use, your host hides a physical storage layer from virtual machines.

Networked Storage:

Networked storage consists of external storage systems that your ESXi host uses to store virtual machine files remotely. Typically, the host accesses these systems over a high-speed storage network.

Networked storage devices are shared. Datastores on networked storage devices can be accessed by multiple hosts concurrently. ESXi supports the following networked storage technologies.

Note

Accessing the same storage through different transport protocols, such as iSCSI and Fibre Channel, at the same time is not supported.

Fibre Channel (FC):

Stores virtual machine files remotely on an FC storage area network (SAN). FC SAN is a specialized high-speed network that connects your hosts to high-performance storage devices. The network uses Fibre Channel protocol to transport SCSI traffic from virtual machines to the FC SAN devices.

Fibre Channel Storage

In this configuration, a host connects to a SAN fabric, which consists of Fibre Channel switches and storage arrays, using a Fibre Channel adapter. LUNs from a storage array become available to the host. You can access the LUNs and create datastores for your storage needs. The datastores use the VMFS format.
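
To inspect Fibre Channel connectivity from the host itself, commands along these lines can be used (a sketch; the NMP device listing applies to any SAN LUN, not only FC):

# Show FC adapters with their WWNN/WWPN and port state
esxcli storage san fc list

# Show the multipathing (NMP) view of the SAN LUNs presented to the host
esxcli storage nmp device list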

Internet SCSI (iSCSI):

Stores virtual machine files on remote iSCSI storage devices. iSCSI packages SCSI storage traffic into the TCP/IP protocol so that it can travel through standard TCP/IP networks instead of the specialized FC network. With an iSCSI connection, your host serves as the initiator that communicates with a target, located in remote iSCSI storage systems.

ESXi offers the following types of iSCSI connections:

Hardware iSCSI: Your host connects to storage through a third-party adapter capable of offloading the iSCSI and network processing. Hardware adapters can be dependent or independent.

Software iSCSI: Your host uses a software-based iSCSI initiator in the VMkernel to connect to storage. With this type of iSCSI connection, your host needs only a standard network adapter for network connectivity.

You must configure iSCSI initiators for the host to access and display iSCSI storage devices.
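
A minimal software iSCSI setup from the command line might look like the following sketch; the adapter name vmhba65 and the target address 192.168.1.50:3260 are placeholders for your environment:

# Enable the software iSCSI initiator and confirm its state
esxcli iscsi software set --enabled=true
esxcli iscsi software get

# Identify the software iSCSI adapter (for example, vmhba65)
esxcli iscsi adapter list

# Add a dynamic (send targets) discovery address and rescan for devices
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.1.50:3260
esxcli storage core adapter rescan --adapter=vmhba65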

iSCSI Storage depicts different types of iSCSI initiators.

iSCSI Storage

In the left example, the host uses the hardware iSCSI adapter to connect to the iSCSI storage system.

In the right example, the host uses a software iSCSI adapter and an Ethernet NIC to connect to the iSCSI storage.

iSCSI storage devices from the storage system become available to the host. You can access the storage devices and create VMFS datastores for your storage needs.

Network-attached Storage (NAS)

Stores virtual machine files on remote file servers accessed over a standard TCP/IP network. The NFS client built into ESXi uses Network File System (NFS) protocol version 3 to communicate with the NAS/NFS servers. For network connectivity, the host requires a standard network adapter.
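
Mounting an NFS datastore can also be done from the command line. A minimal sketch, assuming a hypothetical NFS server nfs01.example.com exporting /export/vm_datastore:

# Mount an NFS v3 export as a datastore named nfs_datastore
esxcli storage nfs add --host=nfs01.example.com --share=/export/vm_datastore --volume-name=nfs_datastore

# Verify the mount
esxcli storage nfs list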

NFS Storage

Shared Serial Attached SCSI (SAS)

Stores virtual machines on direct-attached SAS storage systems that offer shared access to multiple hosts. This type of access permits multiple hosts to access the same VMFS datastore on a LUN.

What is CLOMD in vSAN?

CLOMD (Cluster Level Object Manager Daemon) plays a key role in the operation of a vSAN cluster. It runs on every ESXi host and is responsible for new object creation, initiating repair of existing objects after failures, all types of data moves and evacuations (For example: Enter Maintenance Mode, Evacuate data on disk removal from vSAN), maintaining balance and thus triggering rebalancing, implementing policy changes, etc. 

It does not actually participate in the data path, but it triggers data path operations and as such is a critical component during a number of management workflows and failure handling scenarios. 

Virtual machine power-on and Storage vMotion to vSAN are two less obvious operations where CLOMD is required, because both require the creation of a swap object, and object creation requires CLOMD.

Similarly, starting with vSAN 6.0, memory snapshots are maintained as objects, so taking a snapshot with memory state also requires CLOMD.

Cluster health – CLOMD liveness check:

This checks if the Cluster Level Object Manager (CLOMD) daemon is alive or not. It does so by first checking that the service is running on all ESXi hosts, and then contacting the service to retrieve run-time statistics to verify that CLOMD can respond to inquiries. 

Note: This does not ensure that all of the functionalities discussed above (For example: Object creation, rebalancing) actually work, but it gives a first level assessment as to the health of CLOMD.

CLOMD ERROR 

If any of the ESXi hosts are disconnected, the CLOMD liveness state of the disconnected host is shown as unknown. If the Health service is not installed on a particular ESXi host, the CLOMD liveness state of all the ESXi hosts is also reported as unknown.

If the CLOMD service is not running on a particular ESXi host, the CLOMD liveness state of that host is reported as abnormal.

For this test to succeed, the health service needs to be installed on the ESXi host and the CLOMD service needs to be running. To check the status of the CLOMD service on the ESXi host, run this command:

/etc/init.d/clomd status
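
If the service is stopped, it can usually be restarted with the same init script. A short sketch to run on the affected host:

# Restart CLOMD and confirm that it stays running
/etc/init.d/clomd restart
/etc/init.d/clomd status

# Watch the CLOMD log for repeated crashes after the restart
tail -n 20 /var/run/log/clomd.log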

If the CLOMD health check is still failing after these steps or if the CLOMD health check continues to fail on a regular basis, open a support request with VMware Support.

Examples:

++++++++

In the /var/run/log/clomd.log file, you see logs similar to:

2017-04-19T03:59:32.403Z 120360 (482850097440)(opID:1804289387)CLOMProcessWorkItem: Op REPAIR starts:1804289387 
2017-04-19T03:59:32.403Z 120360 (482850097440)(opID:1804289387)CLOMReconfigure: Reconfiguring ae9cf268-cd5e-abc4-448d-050010d45c96 workItem type REPAIR 
2017-04-19T03:59:32.408Z 120360 (482850097440)(opID:1804289387)CLOMReplacementPreWorkRepair: Repair needed. 1 absent/degraded data components for ae9cf268-cd5e-abc4-448d-050010d45c96 found 

^^^ Here, CLOMD crashed while attempting to repair the object with UUID ae9cf268-cd5e-abc4-448d-050010d45c96. The vSAN health check will report a CLOMD liveness issue. A CLOMD restart will fail because each time it is restarted, it fails again while attempting to repair the zero-sized object. Swap objects are the only vSAN objects that can be zero-sized, so this issue can occur only with swap objects.

Host crash Diagnostic Partitions

A diagnostic partition can be on the local disk where the ESXi software is installed. This is the default configuration for ESXi Installable. You can also use a diagnostic partition on a remote disk shared between multiple hosts. If you want to use a network diagnostic partition, you can install ESXi Dump Collector and configure the networked partition.

The following considerations apply:

>> A diagnostic partition cannot be located on an iSCSI LUN accessed through the software iSCSI or dependent hardware iSCSI adapter. For more information about diagnostic partitions with iSCSI, see General Boot from iSCSI SAN Recommendations in the vSphere Storage documentation.

>> Each host must have a diagnostic partition of 110MB. If multiple hosts share a diagnostic partition on a SAN LUN, the partition should be large enough to accommodate core dumps of all hosts.

>>If a host that uses a shared diagnostic partition fails, reboot the host and extract log files immediately after the failure. Otherwise, the second host that fails before you collect the diagnostic data of the first host might not be able to save the core dump.

Diagnostic Partition Creation:

++++++++++++++++++++++

You can use the vSphere Client to create the diagnostic partition on a local disk or on a private or shared SAN LUN. You cannot use vicfg-dumppart to create the diagnostic partition. The SAN LUN can be set up with Fibre Channel or hardware iSCSI. SAN LUNs accessed through a software iSCSI initiator are not supported.

Managing Core Dumps:

+++++++++++++++++++

With esxcli system coredump, you can manage local diagnostic partitions or set up core dump on a remote server in conjunction with ESXi Dump Collector. For information about ESXi Dump Collector, see the vSphere Networking documentation.

Managing Local Core Dumps with ESXCLI:

++++++++++++++++++++++++++++++

The following example scenario changes the local diagnostic partition with ESXCLI. Specify one of the connection options listed in Connection Options in place of <conn_options>.

To manage a local diagnostic partition

1: Show the diagnostic partition the VMkernel uses and display information about all partitions that can be used as diagnostic partitions.

esxcli <conn_options> system coredump partition list

2: Deactivate the current diagnostic partition.

esxcli <conn_options> system coredump partition set --unconfigure

The ESXi system is now without a diagnostic partition, and you must immediately set a new one.

3: Set the active partition to naa.<naa_ID>.

esxcli <conn_options> system coredump partition set --partition=naa.<naa_ID>

4: List partitions again to verify that a diagnostic partition is set.

esxcli <conn_options> system coredump partition list

If a diagnostic partition is set, the command displays information about it. Otherwise, the command shows that no partition is activated and configured.
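
As an additional sketch, after setting a partition you can enable it explicitly, or (where your ESXCLI version supports the --smart flag) let ESXi select an accessible partition automatically:

# Enable the partition configured in the previous step
esxcli <conn_options> system coredump partition set --enable=true

# Alternatively, let ESXi choose a suitable accessible partition automatically
esxcli <conn_options> system coredump partition set --enable=true --smart

# Show both the active and the configured diagnostic partition
esxcli <conn_options> system coredump partition get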

Managing Core Dumps with ESXi Dump Collector:

++++++++++++++++++++++++++++++++++++

By default, a core dump is saved to the local disk. You can use ESXi Dump Collector to keep core dumps on a network server for use during debugging. ESXi Dump Collector is especially useful for Auto Deploy, but supported for any ESXi 5.0 host. ESXi Dump Collector supports other customization, including sending core dumps to the local disk.

ESXi Dump Collector is included with the vCenter Server autorun.exe application. You can install ESXi Dump Collector on the same system as the vCenter Server service or on a different Windows or Linux machine.

You can configure ESXi Dump Collector by using the vSphere Client or ESXCLI. Specify one of the connection options listed in Connection Options in place of <conn_options>.

To manage core dumps with ESXi Dump Collector:

++++++++++++++++++++++++++++++++++++

1: Set up an ESXi system to use ESXi Dump Collector by running esxcli system coredump.

esxcli <conn_options> system coredump network set --interface-name vmk0 --server-ipv4=1-XX.XXX --port=6500

You must specify a VMkernel port with --interface-name, and the IP address and port of the server to send the core dumps to. If you configure an ESXi system that is running inside a virtual machine, you must choose a VMkernel port that is in promiscuous mode.

2: Enable ESXi Dump Collector.

esxcli <conn_options> system coredump network set --enable=true

3: (Optional) Check that ESXi Dump Collector is configured correctly.

esxcli <conn_options> system coredump network get

The host on which you have set up ESXi Dump Collector sends core dumps to the specified server by using the specified VMkernel NIC and optional port.
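
To confirm that the host can actually reach the configured collector, newer ESXCLI releases also include a check command; a minimal sketch:

# Verify that the configured network dump server is reachable
esxcli <conn_options> system coredump network check

# Review the current network dump configuration
esxcli <conn_options> system coredump network get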

Additional Information : ESXi Network Dump Collector in VMware vSphere 5.x/6.0 

 

Core Dumps for VSAN

If your vSAN cluster uses encryption, and an error occurs on an ESXi host, the resulting core dump is encrypted to protect customer data. Core dumps that are present in the vm-support package are also encrypted.

Note:

Core dumps can contain sensitive information. Check with your Data Security Team and Privacy Policy when handling core dumps.

Core Dumps on ESXi Hosts

When an ESXi host crashes, an encrypted core dump is generated and the host reboots. The core dump is encrypted with the host key that is in the ESXi key cache.

  • In most cases, vCenter Server retrieves the key for the host from the KMS and attempts to push the key to the ESXi host after reboot. If the operation is successful, you can generate the vm-support package and you can decrypt or re-encrypt the core dump.

  • If vCenter Server cannot access the ESXi host, you might be able to retrieve the key from the KMS.

  • If the host used a custom key, and that key differs from the key that vCenter Server pushes to the host, you cannot manipulate the core dump. Avoid using custom keys.

Core Dumps and vm-support Packages

When you contact VMware Technical Support because of a serious error, your support representative usually asks you to generate a vm-support package. The package includes log files and other information, including core dumps. If support representatives cannot resolve the issues by looking at log files and other information, you can decrypt the core dumps to make relevant information available. Follow your organization’s security and privacy policy to protect sensitive information, such as host keys.

Core Dumps on vCenter Server Systems

A core dump on a vCenter Server system is not encrypted. vCenter Server already contains potentially sensitive information. At the minimum, ensure that the Windows system on which vCenter Server runs or the vCenter Server Appliance is protected. You also might consider turning off core dumps for the vCenter Server system. Other information in log files can help determine the problem.

vSAN issues fixed in the 6.5 release

  • An ESXi host fails with purple diagnostic screen when mounting a vSAN disk group: Due to an internal race condition in vSAN, an ESXi host might fail with a purple diagnostic screen when you attempt to mount a vSAN disk group. This issue is resolved in this release.
  • Using objtool on a vSAN witness host causes an ESXi host to fail with a purple diagnostic screen: If you use objtool on a vSAN witness host, it performs an I/O control (ioctl) call that leads to a NULL pointer dereference on the ESXi host, and the host crashes. This issue is resolved in this release.
  • Hosts in a vSAN cluster have high congestion which leads to host disconnects: When vSAN components with invalid metadata are encountered while an ESXi host is booting, a leak of reference counts to SSD blocks can occur. If these components are removed by policy change, disk decommission, or other method, the leaked reference counts cause the next I/O to the SSD block to get stuck. The log files can build up, which causes high congestion and host disconnects. This issue is resolved in this release.
  • Cannot enable vSAN or add an ESXi host into a vSAN cluster due to corrupted disks: When you enable vSAN or add a host to a vSAN cluster, the operation might fail if there are corrupted storage devices on the host. Python zdumps are present on the host after the operation, and the vdq -q command fails with a core dump on the affected host. This issue is resolved in this release.
  • vSAN Configuration Assist issues a physical NIC warning for lack of redundancy when LAG is configured as the active uplink: When the uplink port is a member of a Link Aggregation Group (LAG), the LAG provides redundancy. If the uplink port number is 1, vSAN Configuration Assist issues a warning that the physical NIC lacks redundancy. This issue is resolved in this release.
  • vSAN cluster becomes partitioned after the member hosts and vCenter Server reboot: If the hosts in a unicast vSAN cluster and the vCenter Server are rebooted at the same time, the cluster might become partitioned. The vCenter Server does not properly handle unstable vpxd property updates during a simultaneous reboot of hosts and vCenter Server. This issue is resolved in this release.
  • An ESXi host fails with a purple diagnostic screen due to incorrect adjustment of read cache quota: The vSAN mechanism that controls read cache quota might make incorrect adjustments that result in a host failure with purple diagnostic screen. This issue is resolved in this release.
  • Large File System overhead reported by the vSAN capacity monitor: When deduplication and compression are enabled on a vSAN cluster, the Used Capacity Breakdown (Monitor > vSAN > Capacity) incorrectly displays the percentage of storage capacity used for file system overhead. This number does not reflect the actual capacity being used for file system activities. The display needs to correctly reflect the file system overhead for a vSAN cluster with deduplication and compression enabled. This issue is resolved in this release.
  • vSAN health check reports CLOMD liveness issue due to swap objects with size of 0 bytes: If a vSAN cluster has objects with a size of 0 bytes, and those objects have any components in need of repair, CLOMD might crash. The CLOMD log in /var/run/log/clomd.log might display logs similar to the following:

2017-04-19T03:59:32.403Z 120360 (482850097440)(opID:1804289387)CLOMProcessWorkItem: Op REPAIR starts:1804289387
2017-04-19T03:59:32.403Z 120360 (482850097440)(opID:1804289387)CLOMReconfigure: Reconfiguring ae9cf658-cd5e-dbd4-668d-020010a45c75 workItem type REPAIR 
2017-04-19T03:59:32.408Z 120360 (482850097440)(opID:1804289387)CLOMReplacementPreWorkRepair: Repair needed. 1 absent/degraded data components for ae9cf658-cd5e-dbd4-668d-020010a45c75 found   

    The vSAN health check reports a CLOMD liveness issue. Each time CLOMD is restarted, it crashes while attempting to repair the affected object. Swap objects are the only vSAN objects that can have a size of zero bytes.

This issue is resolved in this release.

  • vSphere API FileManager.DeleteDatastoreFile_Task fails to delete DOM objects in vSAN: If you delete vmdks from the vSAN datastore using the FileManager.DeleteDatastoreFile_Task API, through the file browser or SDK scripts, the underlying DOM objects are not deleted. These objects can build up over time and take up space on the vSAN datastore. This issue is resolved in this release.
  • A host in a vSAN cluster fails with a purple diagnostic screen due to internal race condition: When a host in a vSAN cluster reboots, a race condition might occur between PLOG relog code and vSAN device discovery code. This condition can corrupt memory tables and cause the ESXi host to fail and display a purple diagnostic screen. This issue is resolved in this release.
  • Attempts to install or upgrade an ESXi host with ESXCLI or vSphere PowerCLI commands might fail for the esx-base, vsan, and vsanhealth VIBs: From ESXi 6.5 Update 1 onward, there is a dependency between the esx-tboot VIB and the esx-base VIB, and you must also include the esx-tboot VIB as part of the vib update command for a successful installation or upgrade of ESXi hosts. Workaround: Include the esx-tboot VIB as part of the vib update command. For example:

esxcli software vib update -n esx-base -n vsan -n vsanhealth -n esx-tboot -d /vmfs/volumes/datastore1/update-from-esxi6.5-6.5_update01.zip

Configure vSAN Stretched Cluster

Stretched clusters extend the vSAN cluster from a single data site to two sites for a higher level of availability and intersite load balancing. Stretched clusters are typically deployed in environments where the distance between data centers is limited, such as metropolitan or campus environments.

You can use stretched clusters to manage planned maintenance and avoid disaster scenarios, because maintenance or loss of one site does not affect the overall operation of the cluster. In a stretched cluster configuration, both data sites are active sites. If either site fails, vSAN uses the storage on the other site. vSphere HA restarts any VM that must be restarted on the remaining active site.

Configure a vSAN cluster that stretches across two geographic locations or sites.

Prerequisites

  • Verify that you have a minimum of three hosts: one for the preferred site, one for the secondary site, and one host to act as a witness.

  • Verify that you have configured one host to serve as the witness host for the stretched cluster. Verify that the witness host is not part of the vSAN cluster, and that it has only one VMkernel adapter configured for vSAN data traffic.

  • Verify that the witness host is empty and does not contain any components. To configure an existing vSAN host as a witness host, first evacuate all data from the host and delete the disk group.

Procedure

  1. Navigate to the vSAN cluster in the vSphere Web Client.
  2. Click the Configure tab.
  3. Under vSAN, click Fault Domains and Stretched Cluster.
  4. Click the Stretched Cluster Configure button to open the stretched cluster configuration wizard.
  5. Select the fault domain that you want to assign to the secondary site and click >>.

    The hosts that are listed under the Preferred fault domain are in the preferred site.

  6. Click Next.
  7. Select a witness host that is not a member of the vSAN stretched cluster and click Next.
  8. Claim storage devices on the witness host and click Next.

    Claim storage devices on the witness host. Select one flash device for the cache tier, and one or more devices for the capacity tier.

  9. On the Ready to complete page, review the configuration and click Finish.
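
After the wizard completes, cluster membership and the vSAN network configuration can be double-checked from the command line of each host, including the witness. A minimal sketch; the interface name vmk2 and the witness address 192.168.110.20 are placeholders:

# Confirm that the host has joined the vSAN cluster and list the member UUIDs
esxcli vsan cluster get

# Verify which VMkernel interface carries vSAN traffic
esxcli vsan network list

# Check connectivity to the witness host over the vSAN VMkernel interface
vmkping -I vmk2 192.168.110.20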

 

You can change the witness host for a vSAN stretched cluster.

Change the ESXi host used as a witness host for your vSAN stretched cluster.

Prerequisites

Verify that the witness host is not in use.

Procedure

  1. Navigate to the vSAN cluster in the vSphere Web Client.
  2. Click the Configure tab.
  3. Under vSAN, click Fault Domains and Stretched Cluster.
  4. Click the Change witness host button.
  5. Select a new host to use as a witness host, and click Next.
  6. Claim disks on the new witness host, and click Next.
  7. On the Ready to complete page, review the configuration, and click Finish.

 

You can configure the secondary site as the preferred site. The current preferred site becomes the secondary site.

Procedure

  1. Navigate to the vSAN cluster in the vSphere Web Client.
  2. Click the Configure tab.
  3. Under vSAN, click Fault Domains and Stretched Cluster.
  4. Select the secondary fault domain and click the Mark Fault Domain as preferred for Stretched Cluster icon.
  5. Click Yes to confirm.

    The selected fault domain is marked as the preferred fault domain.

VixDiskLib API

On ESXi hosts, virtual machine disk (VMDK) files are usually located under one of the /vmfs/volumes directories, perhaps on shared storage. Storage volumes are visible from the vSphere Client, in the inventory of hosts and clusters. Typical names are datastore1 and datastore2. To see a VMDK file, click Summary > Resources > Datastore, right-click Browse Datastore, and select a virtual machine.
On Workstation, VMDK files are stored in the same directory as the virtual machine configuration (VMX) files, for example, /path/to/disk on Linux or C:\My Documents\My Virtual Machines on Windows.
VMDK files store data representing a virtual machine’s hard disk drive. Almost all of a VMDK file holds the virtual machine’s data, with a small portion allotted to overhead.

Initialize the Library:
+++++++++++++++++

VixDiskLib_Init() initializes the old virtual disk library. The arguments majorVersion and minorVersion represent the VDDK library’s release number and dot-release number. The optional third, fourth, and fifth arguments specify log, warning, and panic handlers. DLLs and shared objects may be located in libDir.
VixError vixError = VixDiskLib_Init(majorVer, minorVer, &logFunc, &warnFunc, &panicFunc, libDir);
You should call VixDiskLib_Init() only once per process, at the beginning of your program, because of internationalization restrictions. You should call VixDiskLib_Exit() at the end of your program for cleanup. For multithreaded programs, you should write your own logFunc, because the default function is not thread safe.
In most cases you should replace VixDiskLib_Init() with VixDiskLib_InitEx(), which allows you to specify a configuration file.

Virtual Disk Types:
++++++++++++++++

The following disk types are defined in the virtual disk library:
>>>
VIXDISKLIB_DISK_MONOLITHIC_SPARSE – Growable virtual disk contained in a single virtual disk file. This is the default type for hosted disk, and the only setting in the Virtual Disk API.
>>>
VIXDISKLIB_DISK_MONOLITHIC_FLAT – Preallocated virtual disk contained in a single virtual disk file. This takes time to create and occupies a lot of space, but might perform better than sparse.
>>>
VIXDISKLIB_DISK_SPLIT_SPARSE – Growable virtual disk split into 2GB extents (s sequence). These files can grow to 2GB each, then continue growing in a new extent. This type works on older file systems.
>>>
VIXDISKLIB_DISK_SPLIT_FLAT – Preallocated virtual disk split into 2GB extents (f sequence). These files start at 2GB, so they take a while to create, but available space can grow in 2GB increments.
>>>
VIXDISKLIB_DISK_VMFS_FLAT – Preallocated virtual disk compatible with ESX 3 and later. Also known as thick disk. This managed disk type is discussed in Managed Disk and Hosted Disk.
>>>
VIXDISKLIB_DISK_VMFS_SPARSE – Employs a copy-on-write (COW) mechanism to save storage space.
>>>
VIXDISKLIB_DISK_VMFS_THIN – Growable virtual disk that consumes only as much space as needed, compatible with ESX 3 or later, supported by VDDK 1.1 or later, and highly recommended.
>>>
VIXDISKLIB_DISK_STREAM_OPTIMIZED – Monolithic sparse format compressed for streaming. Stream optimized format does not support random reads or writes.
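
On an ESXi host, the VMFS-backed types above correspond loosely to the formats that vmkfstools can create or convert to. A short sketch with placeholder paths (the target directory must already exist):

# Create a 20 GB thin-provisioned disk (similar to VIXDISKLIB_DISK_VMFS_THIN)
vmkfstools -c 20G -d thin /vmfs/volumes/datastore1/demo/demo_thin.vmdk

# Clone it to a preallocated (flat/thick) disk, similar to VIXDISKLIB_DISK_VMFS_FLAT
vmkfstools -i /vmfs/volumes/datastore1/demo/demo_thin.vmdk -d zeroedthick /vmfs/volumes/datastore1/demo/demo_flat.vmdk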

Check the sample programs at:

http://pubs.vmware.com/vsphere-65/index.jsp#com.vmware.vddk.pg.doc/vddkSample.7.2.html#995259

 

vSAN Prerequisites and Requirements for Deployment

Before delving into the installation and configuration of vSAN, it’s necessary to discuss the requirements and the prerequisites. VMware vSphere is the foundation of every vSAN based virtual infrastructure.

VMware vSphere:
+++++++++++++++

vSAN was first released with VMware vSphere 5.5 U1. Additional versions of vSAN were released with VMware vSphere 6.0 (vSAN 6.0), VMware vSphere 6.0 U1 (vSAN 6.1), and VMware vSphere 6.0 U2 (vSAN 6.2). Each of these releases included additional vSAN features.

VMware vSphere consists of two major components: the vCenter Server management tool and the ESXi hypervisor. To install and configure vSAN, both vCenter Server and ESXi are required.
VMware vCenter Server provides a centralized management platform for VMware vSphere environments. It is the solution used to provision new virtual machines (VMs), configure hosts, and perform many other operational tasks associated with managing a virtualized infrastructure.
To run a fully supported vSAN environment, vCenter Server 5.5 U1 is the minimum requirement, although VMware strongly recommends using the latest version of vSphere where possible. vSAN can be managed by both the Windows version of vCenter Server and the vCenter Server Appliance (VCSA). vSAN is configured and monitored via the vSphere Web Client, which also requires a minimum version of 5.5 U1 for support. vSAN can also be fully configured and managed through the command-line interface (CLI) and the vSphere application programming interface (API) for those wanting to automate some (or all) aspects of vSAN configuration, monitoring, or management. Although a single cluster can contain only one vSAN datastore, a vCenter Server can manage multiple vSAN and compute clusters.

ESXi:
+++++

VMware ESXi is an enterprise-grade virtualization product that allows you to run multiple instances of an operating system in a fully isolated fashion on a single server. It is a bare-metal solution, meaning that it does not require a host operating system and has an extremely thin footprint. ESXi is the foundation for the large majority of virtualized environments worldwide. For standard datacenter deployments, vSAN requires a minimum of three ESXi hosts (where each host has local storage and is contributing this storage to the vSAN datastore) to form a supported vSAN cluster. This is to allow the cluster to meet the minimum availability requirement of tolerating at least one host failure.

With vSAN 6.1 (released with vSphere 6.0 U1), VMware introduced the concept of a 2-node vSAN cluster primarily for remote office/branch office deployments. There are some additional considerations around the use of a 2-node vSAN cluster, including the concept of a witness host. As of vSAN 6.0, a maximum of 64 ESXi hosts in a cluster is supported, a significant increase from the 32 hosts that were supported in the initial vSAN release that was part of vSphere 5.5, from here on referred to as vSAN 5.5. The ESXi hosts must, however, be running version 6.0 at a minimum to support 64 hosts. At a minimum, it is recommended that a host have at least 6 GB of memory. If you configure a host to contain the maximum number of disk groups, we recommend that the host be configured with a minimum of 32 GB of memory. vSAN does not consume all of this memory, but it is required for the maximum configuration. The vSAN host memory requirement is directly related to the number of physical disks in the host and the number of disk groups configured on the host. In all cases we recommend going with more than 32 GB per host to ensure that your workloads, vSAN, and the hypervisor have sufficient resources for an optimal user experience.

(Diagram: minimum number of hosts contributing storage to the vSAN datastore.)

Cache and Capacity Devices:
+++++++++++++++++++++++++++
With the release of vSAN 6.0, VMware introduced the new all-flash version of vSAN; with version 5.5, vSAN was only available as a hybrid configuration. A hybrid configuration is one where the cache tier is made up of flash-based devices and the capacity tier is made up of magnetic disks. In the all-flash version, both the cache tier and the capacity tier are made up of flash devices. The flash devices of the cache and capacity tiers are typically a different grade of flash device in terms of performance and endurance. This allows you, under certain circumstances, to create all-flash configurations at a cost comparable to SAS-based magnetic disk configurations.

 

vSAN Requirements:
++++++++++++++++++
Before enabling vSAN, it is highly recommended that the vSphere administrator validate that the environment meets all the prerequisites and requirements. To enhance resilience, this list also includes recommendations from an infrastructure perspective:
>>Minimum of three ESXi hosts for standard datacenter deployments. Minimum of two ESXi hosts and a witness host for the smallest deployment, for example, remote office/branch office.
>>Minimum of 6 GB memory per host to install ESXi.
>>VMware vCenter Server.
>>At least one device for the capacity tier. One hard disk for hosts contributing storage to vSAN datastore in a hybrid configuration; one flash device for hosts contributing storage to vSAN datastore in an all-flash configuration.
>>At least one flash device for the cache tier for hosts contributing storage to vSAN datastore, whether hybrid or all-flash.
>>One boot device to install ESXi.
>>At least one disk controller. Pass-through/JBOD mode capable disk controller preferred.
>>Dedicated network port for vSAN–VMkernel interface. 10 GbE preferred, but 1 GbE supported for smaller hybrid configurations. With 10 GbE, the adapter does not need to be dedicated to vSAN traffic, but can be shared with other traffic types, such as management traffic, vMotion traffic, etc.
>>L3 multicast is required on the vSAN network.
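
Several of these requirements can be pre-checked from the ESXi command line before enabling vSAN. A minimal sketch; the VMkernel interface name vmk2 and the neighbor address 192.168.110.11 are placeholders:

# Check which local devices are eligible for vSAN
vdq -q

# Verify that a VMkernel interface is tagged for vSAN traffic
esxcli vsan network list
esxcli network ip interface list

# Confirm that the host can reach another cluster member over the vSAN network
vmkping -I vmk2 192.168.110.11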

vSAN Ready Nodes:
++++++++++++++++++
vSAN ready nodes are a great alternative to manually selecting components, and they are the preferred way of building a vSAN configuration. Various vendors have gone through the exercise for you and created configurations that are called vSAN ready nodes. These nodes consist only of tested and certified hardware and, in our opinion, provide an additional guarantee.

For more information please follow : https://www.vsan-essentials.com/

vSAN Performance Capabilities

It is difficult to predict what your performance will be because every workload and every combination of hardware will provide different results. After the initial vSAN launch, VMware announced the results of multiple performance tests
(http://blogs.vmware.com/vsphere/2014/03/supercharge-virtual-san-cluster-2-millioniops.html).

The results were impressive, to say the least, but they were only the beginning. With the 6.1 release, performance of hybrid had doubled and so had the scale, allowing for 8 million IOPS per cluster. The introduction of all-flash, however, completely changed the game. This allowed vSAN to reach 45K IOPS per disk group, and remember you can have five disk groups per host, but it also introduced sub-millisecond latency. (Just for completeness sake, theoretically it would be possible to design a vSAN cluster that could deliver over 16 million IOPS with sub-millisecond latency using an all-flash configuration.)

Do note that these performance numbers should not be used as a guarantee for what you can achieve in your environment. These are theoretical tests that are not necessarily (and most likely not) representative of the I/O patterns you will see in your own environment (and so results will vary). Nevertheless, it does prove that vSAN is capable of delivering a high performance environment. At the time of writing the latest performance document available is for vSAN 6.0, which can be found here:
http://www.vmware.com/files/pdf/products/vsan/VMware-Virtual-San6-ScalabilityPerformance-Paper.pdf.

We highly recommend, however, searching for the latest version, as we are certain that there will be an updated version with the 6.2 release of vSAN. One thing that stands out when reading these types of papers is that all performance tests and reference architectures by VMware that are publicly available have been done with 10 GbE networking configurations. For our design scenarios, we will use 10 GbE as the golden standard because it is heavily recommended by VMware and increases throughput and lowers latency. The only configuration where this does not apply is ROBO (remote office/branch office). This 2-node vSAN configuration is typically deployed using 1 GbE since the number of VMs running is typically relatively low (up to 20 in total). Different configuration options for networking, including the use of Network I/O Control, are also worth considering.