vSAN performance diagnostics reports: “One or more disk group(s) are not in active use”

This issue means that the specified disk groups received no IOs during part of the evaluated time period, which limits the maximum achievable performance of the vSAN cluster. The Max IOPS and Max Throughput goals require IO activity on every disk group.

Here is a list of possible solutions:

1. If you are running a benchmark, increase the number of virtual machines or the number of VMDKs per virtual machine so that all disk groups hold some object components.

2. Verify that your benchmark is actually issuing IOs to all the VMDKs that were created.

3. If you do not want to increase the number of virtual machines or VMDKs, increase the “Number of disk stripes per object” setting (the default value is 1) in the Virtual SAN Storage Policy with which the VMDKs were created.

4. Apply the policy manually to existing or new virtual machines and VMDKs.

Understanding vSAN memory consumption in ESXi 6.5.0d / 6.0 U3

To calculate vSAN memory consumption in these releases, use this equation:

BaseConsumption + (NumDiskGroups * (DiskGroupBaseConsumption + (SSDMemOverheadPerGB * SSDSize))) + (NumCapacityDisks * CapacityDiskBaseConsumption)

Where:
- BaseConsumption: the fixed amount of memory consumed by vSAN per ESXi host.
- NumDiskGroups: the number of disk groups in the host; this value ranges from 1 to 5.
- DiskGroupBaseConsumption: the fixed amount of memory allocated to each individual disk group in the host. This is mainly used to allocate resources that support in-flight operations at the disk-group level.
- SSDMemOverheadPerGB: the amount of memory allocated per GB of SSD.
- SSDSize: the size of the SSD disk in GB.
- NumCapacityDisks: the number of capacity disks in the host (across all disk groups).
- CapacityDiskBaseConsumption: the amount of memory allocated per capacity disk.

Constants:
- BaseConsumption = 5426 MB
- DiskGroupBaseConsumption = 636 MB
- SSDMemOverheadPerGB (hybrid) = 8 MB
- SSDMemOverheadPerGB (all-flash) = 14 MB
- CapacityDiskBaseConsumption = 70 MB

Note: In these releases, encryption and deduplication features have no impact on memory consumption.

Example: One disk group per host, all flash configuration:

BaseConsumption + 
(NumDiskGroups * (DiskGroupBaseConsumption + (SSDMemOverheadPerGB * SSDSize))) +
(NumCapacityDisks * CapacityDiskBaseConsumption)
=
5426 MB + (1 * (636 MB + (14 MB * 600))) + (3 * 70 MB)
=
14672 MB
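The equation and constants above can be sketched as a small helper function (a minimal illustration; the function name is ours, not a VMware API):

```python
def vsan_memory_mb(num_disk_groups, ssd_size_gb, num_capacity_disks, all_flash=True):
    """Estimate per-host vSAN memory consumption (MB) for ESXi 6.5.0d / 6.0 U3."""
    BASE_CONSUMPTION = 5426           # fixed per-host overhead (MB)
    DISK_GROUP_BASE = 636             # fixed per-disk-group overhead (MB)
    SSD_OVERHEAD_PER_GB = 14 if all_flash else 8  # all-flash vs hybrid (MB per GB of SSD)
    CAPACITY_DISK_BASE = 70           # overhead per capacity disk (MB)
    return (BASE_CONSUMPTION
            + num_disk_groups * (DISK_GROUP_BASE + SSD_OVERHEAD_PER_GB * ssd_size_gb)
            + num_capacity_disks * CAPACITY_DISK_BASE)

# Worked example from the text: 1 all-flash disk group, 600 GB SSD, 3 capacity disks
print(vsan_memory_mb(1, 600, 3))  # 14672
```

The same helper also covers the hybrid case by switching the per-GB SSD overhead from 14 MB to 8 MB.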

NEW FEATURES IN VSAN 6.6.1

Integration with VUM (VMware Update Manager)

vSAN 6.6.1 is fully integrated with VUM to provide a powerful new update process that ensures your vSAN cluster stays up to date.

vSAN Performance Diagnostics:

The feature offers advice such as increasing the stripe width, adding more VMs, or bringing more disk groups into use to improve performance. It requires the Customer Experience Improvement Program (CEIP) and the vSAN Performance Service to be enabled. Once these have been configured and diagnostic information has been gathered for approximately one hour, benchmark results can be examined to see whether the maximum IOPS, maximum throughput, or minimum latency goal has been achieved. If not, guidance is provided on how best to achieve these objectives in a benchmark run.



Improved vCenter and unicast behaviour for cluster membership:

If vCenter is down and changes are then made to the cluster, those cluster changes may be reset when vCenter recovers. vSAN 6.6.1 introduces a new vSAN property called “configuration generation” that addresses the problem of tracking vSAN membership and avoids this issue.

New health checks for Update Manager (VUM)
 

The health check has been updated to include checks for new features, such as VUM integration.

Change the EVC Mode for a Cluster

Verify the following:

Verify that all hosts in the cluster have supported CPUs for the EVC mode you want to enable. See http://kb.vmware.com/kb/1003212 for a list of supported CPUs.
Verify that all hosts in the cluster are connected and registered on vCenter Server. The cluster cannot contain a disconnected host.
Virtual machines must be in the following power states, depending on whether you raise or lower the EVC mode.

Raise the EVC mode to a CPU baseline with more features:
Running virtual machines can remain powered on. New EVC mode features are not available to the virtual machines until they are powered off and powered on again. A full power cycle is required; rebooting the guest operating system or suspending and resuming the virtual machine is not sufficient.

Lower the EVC mode to a CPU baseline with fewer features:
Power off virtual machines if they are powered on and running at a higher EVC mode than the one you intend to enable.
Procedure:

1. Select a cluster in the inventory.
2. Click the Manage tab and click Settings.
3. Select VMware EVC and click Edit.
4. Select whether to enable or disable EVC.
Option                       Description
Disable EVC                  The EVC feature is disabled. CPU compatibility is not enforced for the hosts in this cluster.
Enable EVC for AMD Hosts     The EVC feature is enabled for AMD hosts.
Enable EVC for Intel Hosts   The EVC feature is enabled for Intel hosts.

5. From the VMware EVC Mode drop-down menu, select the baseline CPU feature set that you want to enable for the cluster.
6. If you cannot select the EVC Mode, the Compatibility pane displays the reason, along with the relevant hosts for each reason.
7. Click OK.

About this task:

Several EVC approaches are available to ensure CPU compatibility:

1.If all the hosts in a cluster are compatible with a newer EVC mode, you can change the EVC mode of an existing EVC cluster.
2.You can enable EVC for a cluster that does not have EVC enabled.
3.You can raise the EVC mode to expose more CPU features.
4.You can lower the EVC mode to hide CPU features and increase compatibility.

Limits on Simultaneous Migrations

vCenter Server places limits on the number of simultaneous virtual machine migration and provisioning operations that can occur on each host, network, and datastore. Each operation, such as a migration with vMotion or cloning a virtual machine, is assigned a resource cost. Each host, datastore, or network resource, has a maximum cost that it can support at any one time. Any new migration or provisioning operation that causes a resource to exceed its maximum cost does not proceed immediately, but is queued until other operations complete and release resources. Each of the network, datastore, and host limits must be satisfied for the operation to proceed.

Network Limits:

Network limits apply only to migrations with vMotion. Network limits depend on the version of ESXi and the network type. All migrations with vMotion have a network resource cost of 1.

Network Limits for Migration with vMotion

Operation   ESXi Version         Network Type   Maximum Cost
vMotion     5.0, 5.1, 5.5, 6.0   1GigE          4
vMotion     5.0, 5.1, 5.5, 6.0   10GigE         8

Datastore Limits:

Datastore limits apply to migrations with vMotion and with Storage vMotion. A migration with vMotion has a resource cost of 1 against the shared virtual machine’s datastore. A migration with Storage vMotion has a resource cost of 1 against the source datastore and 1 against the destination datastore.

Datastore Limits and Resource Costs for vMotion and Storage vMotion

Operation         ESXi Version         Maximum Cost Per Datastore   Datastore Resource Cost
vMotion           5.0, 5.1, 5.5, 6.0   128                          1
Storage vMotion   5.0, 5.1, 5.5, 6.0   128                          16
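The per-datastore maximum cost and per-operation costs imply a concurrency ceiling, which can be checked with a one-line calculation (the function name is illustrative, not a vSphere API):

```python
def max_concurrent(max_cost_per_datastore, op_cost):
    """Integer number of identical operations a datastore can admit at once."""
    return max_cost_per_datastore // op_cost

# Using the table above: Storage vMotion costs 16 per datastore, vMotion costs 1
print(max_concurrent(128, 16))  # 8 concurrent Storage vMotion migrations per datastore
print(max_concurrent(128, 1))   # 128 concurrent vMotion migrations per datastore
```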

Host Limits:

Host limits apply to migrations with vMotion, Storage vMotion, and other provisioning operations such as cloning, deployment, and cold migration. All hosts have a maximum cost per host of 8. For example, on an ESXi 5.0 host, you can perform 2 Storage vMotion operations, or 1 Storage vMotion and 4 vMotion operations.

Host Migration Limits and Resource Costs for vMotion, Storage vMotion, and Provisioning Operations

Operation                        ESXi Version         Derived Limit Per Host   Host Resource Cost
vMotion                          5.0, 5.1, 5.5, 6.0   8                        1
Storage vMotion                  5.0, 5.1, 5.5, 6.0   2                        4
vMotion Without Shared Storage   5.1, 5.5, 6.0        2                        4
Other provisioning operations    5.0, 5.1, 5.5, 6.0   8                        1
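The host-limit arithmetic above can be sketched as a hypothetical admission check mirroring vCenter's cost model (the names and dictionary are illustrative, not a vSphere API):

```python
HOST_MAX_COST = 8  # maximum cost per host (ESXi 5.0-6.0)

# Host resource cost per operation, from the table above
COSTS = {
    "vmotion": 1,
    "storage_vmotion": 4,
    "vmotion_no_shared_storage": 4,
    "provisioning": 1,
}

def host_can_admit(in_flight, new_op):
    """Return True if a host can start new_op given the operations already in flight."""
    used = sum(COSTS[op] for op in in_flight)
    return used + COSTS[new_op] <= HOST_MAX_COST

# 2 Storage vMotions already consume the full cost of 8, so nothing else is admitted
print(host_can_admit(["storage_vmotion"] * 2, "vmotion"))  # False
# 1 Storage vMotion (4) + 3 vMotions (3) leaves room for exactly one more vMotion
print(host_can_admit(["storage_vmotion", "vmotion", "vmotion", "vmotion"], "vmotion"))  # True
```

This matches the example in the text: a host saturates at either 2 Storage vMotion operations, or 1 Storage vMotion plus 4 vMotion operations.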

 

Edit a Fileset Template

Edit a fileset template to change the set of data that the template defines.

NOTE:
- The Rubrik cluster applies changes made to a fileset template to all filesets that are derived from the template.
- Fileset changes apply only to new backups, not to existing backups.

STEPS:

1. Log in to the Rubrik web UI.

2. In the left-side menu, click Protection > Linux Hosts.

3. Click Templates.

4. Select a template.

5. Open the ellipsis menu and select Edit.

6. Make changes to the field values.

7. Click Update.

Edit a Fileset

NOTE:
- Editing a fileset that is derived from a fileset template detaches the fileset from the template.
- The edited fileset becomes a custom fileset.

STEPS:

1. Log in to the Rubrik web UI.
2. In the left-side menu, click Protection > Linux Hosts.
3. Click Filesets.
4. Select a fileset.
5. Open the ellipsis menu and select Edit.
 When the fileset is derived from a fileset template, a warning dialog box appears. Click Continue.
6. Make changes to the field values.
7. Click Update.

The Rubrik cluster modifies the fileset and applies the changes to all new backups based on the fileset.

Delete vCenter Details

NOTE:

The vCenter Server must be online in order for the deletion process to succeed.

1. Log in to the web UI on the relevant Rubrik cluster.

2. Click the gear icon on the top bar of the web UI (Settings menu).

3. From the Settings menu, select Manage vCenters.

4. Click the ellipsis icon of a vCenter Server entry.

5. Click Delete.

A confirmation dialog box appears.

6. Click Delete.


Delete a User Account

1. Log in to the web UI on the relevant Rubrik cluster.

2. Click the gear icon on the top bar of the web UI.

3. From the Settings menu, select Manage Users.

4. Scroll the page or use the search field to locate a user.

5. Click the ellipsis icon next to the user account entry.

6. Select Delete.

7. Click Delete.