NOTE:
+++++
Editing a fileset that is derived from a fileset template detaches the fileset from the template. The edited fileset changes to a custom fileset.
STEPS:
++++++
1. Log in to the Rubrik web UI.
2. In the left-side menu, click Protection > Linux Hosts.
3. Click Filesets.
4. Select a fileset.
5. Open the ellipsis menu and select Edit. When the fileset is derived from a fileset template, a warning dialog box appears. Click Continue.
6. Make changes to the field values.
7. Click Update.
The Rubrik cluster modifies the fileset and applies the changes to all new backups based on the fileset.
Author: tapasmahanta124
Delete vCenter Details
NOTE:
+++++
The vCenter Server must be online in order for the deletion process to succeed.
1. Log in to the web UI on the relevant Rubrik cluster.
2. Click the gear icon on the top bar of the web UI (Settings menu).
3. From the Settings menu, select Manage vCenters.
4. Click the ellipsis icon of a vCenter Server entry.
5. Click Delete.
A confirmation dialog box appears.
6. Click Delete.
Delete a User Account
1. Log in to the web UI on the relevant Rubrik cluster.
2. Click the gear icon on the top bar of the web UI.
3. From the Settings menu, select Manage Users.
4. Scroll the page or use the search field to locate a user.
5. Click the ellipsis icon next to the user account entry.
6. Select Delete.
7. Click Delete.
Create On Demand Snapshots
1. Log in to the web UI on the relevant Rubrik cluster.
2. On the left-side menu of the Rubrik web UI, select SLA Domains > Local Domains.
3. Select the name of an SLA Domain.
4. In the VMs section of the SLA Domain’s page, click the name of a virtual machine.
5. On the Local VM Details page, click Take On Demand Snapshot. The Rubrik cluster creates the on-demand snapshot.
How to Uninstall / Reinstall the SQL Agent
To remove the Rubrik agent:
++++++++++++++++++++++++++++++
Run the following from an elevated (Administrator) PowerShell prompt:
Get-WmiObject -Class Win32_Product -Filter "Name='Rubrik Backup Service'" | ForEach-Object { $_.Uninstall() }
Remove the following directory and its subdirectories:
C:\ProgramData\Rubrik\Rubrik Backup Service\*
After removing the Rubrik SQL agent, verify that the following registry key has been removed; if it has not, remove it manually:
'HKEY_LOCAL_MACHINE\SOFTWARE\Rubrik Inc.\Backup Service\Backup Agent ID'
When reinstalling the agent use the following steps:
+++++++++++++++++++++++++++++++++++++++++++++++++++++
Download the agent from the following link:
https://<cluster-address>/connector/RubrikBackupService.zip.
To install the agent, run the following from a PowerShell prompt opened as Administrator:
msiexec /i C:\Rubrik_Windows\RubrikBackupService.msi /qn
Cisco UCS Box Upgrade Fails
Upgrade Issue on UCS Manager:
+++++++++++++++++++++++++++++
Versions Affected: https://www.cisco.com/c/en/us/support/docs/field-notices/640/fn64094.html
++ UCSM and FI B upgrade successfully, but FI A is stuck at 65% with the following message:
UCS pre-upgrade check failed. Free space in the file system is below threshold
Note: No other UCS platforms have been seen stuck at this message, unlike the N5K and N3K. This issue is similar to Cisco bug CSCun79792.
Load the debug plugin and perform the following (see Loading Debug Plugin in UCS at the end of this section):
Linux(debug)# cd /var/tmp
Linux(debug)# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 300M 255M 46M 85% /
none 2.0M 4.0K 2.0M 1% /post
none 300M 255M 46M 85% /var
none 3.0G 1.1G 2.0G 35% /isan
none 600M 180M 421M 30% /var/tmp <————
Note: Check how much of /var/tmp is in use. In this case, 30% is used.
Most of the time, it is smm.log and auto_file_deletion_log.txt that eat up the space.
Here is an example. After you have run "df -h", issue the following command to see which files are consuming the space (the output below shows smm.log using 1.3M):
Linux(debug)# ls -lSh
total 2.2M
-rw-rw-rw- 1 root root 1.3M Apr 22 14:20 smm.log <<<———
-rw-rw-rw- 1 root root 245K Apr 21 03:28 afm_srv_15.log.0
-rw-r--r-- 1 root root 196K Apr 8 11:47 sam_bridge_boot.log
-rw-rw-rw- 1 root root 63K Apr 22 14:18 afm_srv_15.log
-rw-rw-rw- 1 root root 60K Apr 8 11:50 syslogd_errlog.4886_4886
-rw-rw-rw- 1 root root 21K Apr 8 11:47 fm_debug.log.vdc_1.4919
-rw-rw-rw- 1 root root 18K Apr 8 11:49 fcoe_mgr_init.log
-rw-rw-rw- 1 root root 5.2K Apr 8 11:49 first_setup.log
-rwxr-xr-x 1 root root 4.4K Apr 8 12:25 iptab.sh
Because the files are in /var/tmp, they are generally safe to delete. Rather than deleting the files outright, it is safer to echo 0 into each file to empty it and reduce its size; this ensures that any service that depends on being able to write to the log file still can.
Linux(debug)# echo 0 > /var/tmp/smm.log
Linux(debug)# echo 0 > /var/tmp/auto_root_file_deletion_log.txt
After this, you can run df -h again to confirm that /var/tmp is now below 10% utilization. The upgrade should now be able to proceed.
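The truncate-instead-of-delete cleanup above can be sketched as a small shell helper. This is a minimal sketch, not part of the UCS procedure: the function name truncate_logs is hypothetical, and the 1M size threshold is an illustrative assumption (the procedure above only names specific files).

```shell
# truncate_logs DIR: empty every regular file over 1M directly under DIR.
# Truncating (rather than deleting) preserves the inode, so any service
# holding the log file open can keep writing to it.
truncate_logs() {
    dir="${1:-/var/tmp}"              # default matches the directory in this procedure
    find "$dir" -maxdepth 1 -type f -size +1M | while read -r f; do
        echo "truncating: $f"
        : > "$f"                      # truncate to zero bytes in place
    done
}

# Example: truncate_logs /var/tmp
```

Note that `: > file` leaves a truly empty file, while `echo 0 > file` (as used above) leaves a two-byte file; either way the open file handles stay valid, which is the point of truncating instead of deleting.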
++ After applying this workaround, the FI was still stuck at 65% after 10 minutes. A reboot of the FI and another 10-minute wait did not help either, so the FI firmware image was re-activated to 2.2(6c) with the force option, and the upgrade then completed successfully.
Loading Debug Plugin in UCS
=======================
First Get the debug Plugin from : https://cspg-releng.cisco.com/
Note: You will need to select the appropriate version for the Plugin.
Below is an example for version 2.2(6c):
Select and click 2.2.6 Elcap_MR5 under Release Builds on the left-hand side (do not select Commits).
Then select FCSc, click BUNDLES, and select DEBUG.
Note: You can upload this image to UCSM the same way you upload upgrade images. Once you have uploaded it to UCSM, log in to the CLI and perform the following steps.
Load debug plugin
FI-A(local-mgmt)# copy debug_plugin/ucs-dplug.5.0.3.N2.2.26c.gbin x
FI-A(local-mgmt)# load x
###############################################################
Warning: debug-plugin is for engineering internal use only!
For security reason, plugin image has been deleted.
###############################################################
Successfully loaded debug-plugin!!!
Detail List of Commands:
+++++++++++++++++++++
Connect Local-mgmt
FI-A(local-mgmt)# copy debug_plugin/ucs-dplug.5.0.3.N2.2.26c.gbin x
FI-A(local-mgmt)# load x
Linux(debug)# cd /var/tmp
Linux(debug)# df -h
Linux(debug)# ls -lSh
Linux(debug)# echo 0 > /var/tmp/smm.log
Linux(debug)# echo 0 > /var/tmp/auto_root_file_deletion_log.txt
CHECK THE HARDWARE VERSION:
+++++++++++++++++++++++++++++++
5596# show sprom sup | inc H/W
If the hardware version is 1.1:
Copy dplugin and update file, Load Plugin
Copy the ucd-update.tar to the bootflash
Copy the debug plugin to the bootflash
Load the image via the load command
>>You might have to call support for the update.tar script
Linux(debug)# cp /bootflash/ucd-update.tar /tmp
Linux(debug)# cd /tmp/
Linux(debug)# tar -xvf ucd-update.tar
ucd.bin
Linux(debug)#
Step 2
———————————-
Run the ucd.bin file:
Linux(debug)# ./ucd.bin
You will see the prompt about updated version 1.1.
SAFESHUT:
++++++++++
>>Have to contact support for the safeshut.tar script
copy tftp://10.96.60.154/safeshut.tar workspace://safeshut.tar
Untar safeshut.tar (tar -xvf safeshut.tar)
Linux(debug)# ./safeshut.sh
md5sum for ./klm_sup_ctrl_mc.klm does not match - corrupted tarball?
Linux(debug)# ./safeshut.sh
md5sum for ./rebootsys.bin does not match - corrupted tarball?
cd /bootflash >> Checks whether safeshut is updated; look for a change in the reboot script's timestamp, which should be the current time
cd /sbin
ls -la | grep "reboot"
cd /bootflash
ls -la | grep -i "klm_sup_ctrl_mc.klm"
Repeat for the peer FI
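The timestamp check above can be expressed as a small helper. A minimal sketch under stated assumptions: the function name was_updated_recently is hypothetical, `stat -c %Y` is GNU-specific, and the file name and age threshold in the example are placeholders for whichever script (rebootsys.bin, klm_sup_ctrl_mc.klm) you are verifying.

```shell
# was_updated_recently FILE MAX_AGE_SECONDS:
# succeed if FILE's modification time is within MAX_AGE_SECONDS of now,
# i.e. the file was just replaced by the safeshut/update procedure.
was_updated_recently() {
    file="$1"; max_age="${2:-300}"
    [ -e "$file" ] || { echo "missing: $file"; return 1; }
    mtime=$(stat -c %Y "$file")       # GNU stat; BSD stat would need -f %m
    now=$(date +%s)
    age=$((now - mtime))
    echo "$file modified ${age}s ago"
    [ "$age" -le "$max_age" ]
}

# Example: was_updated_recently /bootflash/klm_sup_ctrl_mc.klm 600
```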
OPTIONAL:
++++++++++
echo 7 7 7 7 > /proc/sys/kernel/printk
Understanding VVols
Virtual Volumes (VVols) is a new integration and management framework that virtualizes SAN/NAS arrays, enabling a more efficient operational model that is optimized for virtualized environments and centered on the application instead of the infrastructure. Virtual Volumes simplifies operations through policy-driven automation that enables more agile storage consumption for virtual machines and dynamic adjustments in real time, when they are needed. It simplifies the delivery of storage service levels to individual applications by providing finer control of hardware resources and native array-based data services that can be instantiated with virtual machine granularity.
With Virtual Volumes (VVols), VMware offers a new paradigm in which an individual virtual machine and its disks, rather than a LUN, becomes the unit of storage management for a storage system. Virtual volumes encapsulate virtual disks and other virtual machine files, and natively store the files on the storage system.
Overview:
++++++++++++++
Virtual Volumes (VVols) are VMDK-granular storage entities exported by storage arrays. Virtual volumes are exported to the ESXi host through a small set of protocol endpoints (PE). Protocol endpoints are part of the physical storage fabric, and they establish a data path from virtual machines to their respective virtual volumes on demand. Storage systems enable data services on virtual volumes; the results of these data services are new virtual volumes. Data services, configuration, and management of virtual volume systems are done exclusively out-of-band with respect to the data path. Virtual volumes can be grouped logically into storage containers (SC) for management purposes.
Virtual volumes (VVols) and Storage Containers (SC) form the virtual storage fabric. Protocol Endpoints (PE) are part of the physical storage fabric.
By using a special set of APIs called vSphere APIs for Storage Awareness (VASA), the storage system becomes aware of the virtual volumes and their associations with the relevant virtual machines. Through VASA, vSphere and the underlying storage system establish two-way out-of-band communication to perform data services and offload certain virtual machine operations, such as snapshots and clones, to the storage system.
For in-band communication with Virtual Volumes storage systems, vSphere continues to use standard SCSI and NFS protocols. As a result, Virtual Volumes supports any type of storage, including iSCSI, Fibre Channel, Fibre Channel over Ethernet (FCoE), and NFS.
- Virtual Volumes represent virtual disks of a virtual machine as abstract objects, identified by a 128-bit GUID and managed entirely by the storage hardware.
- The model changes from managing space inside datastores to managing abstract storage objects handled by storage arrays.
- Storage hardware gains complete control over virtual disk content, layout, and management.
WORKFLOW:
Important things to note:
+++++++++++++++++++++++
- Storage provider (SP): The storage provider acts as the interface between the hypervisor and the external array. It is implemented out-of-band (it is not in the data path) and uses the existing VASA (vSphere APIs for Storage Awareness) protocol. The storage provider also supplies information such as details on VVols and storage containers. VVols requires VASA 2.0, released with vSphere 6.
- Storage container (SC): This is configured on the external storage appliance. The specific implementation of the storage container varies between storage vendors, although most vendors allow physical storage to be aggregated into pools from which logical volumes can be created.
- Protocol endpoint (PE): Acts as an intermediary between VVols and the hypervisor, and is implemented as a traditional LUN on block-based systems, although it stores no actual data (a dummy LUN). The protocol endpoint has also been described as an I/O de-multiplexer, because it is a pass-through mechanism that allows access to the VVols bound to it (for example, the gatekeeper LUN in an EMC VMAX array). ESXi hosts have no direct access to virtual volumes on the storage side. Instead, ESXi hosts use a logical I/O proxy, the Protocol Endpoint, to communicate with virtual volumes and the virtual disk files that virtual volumes encapsulate.
- VVols Objects:
+++++++++++++
A virtual datastore represents a storage container in vCenter Server and the vSphere Web Client. Virtual volumes are encapsulations of virtual machine files, virtual disks, and their derivatives.
- Virtual machine objects are stored natively in the array's storage containers.
- There are five different types of recognized Virtual Volumes:
- Config-VVol – Metadata
- Data-VVol – VMDKs
- Mem-VVol – Snapshots
- Swap-VVol – Swap files
- Other-VVol – Vendor solution specific
Follow these guidelines when using Virtual Volumes:
- Because the Virtual Volumes environment requires the vCenter Server, you cannot use Virtual Volumes with a standalone ESXi host.
- Virtual Volumes does not support Raw Device Mappings (RDMs).
- A Virtual Volumes storage container cannot span across different physical arrays.
- Host profiles that contain virtual datastores are vCenter Server specific. After you extract this type of host profile, you can attach it only to hosts and clusters managed by the same vCenter Server as the reference host.
Key benefits of Virtual Volumes:
++++++++++++++++++++++++++
- Operational transformation with Virtual Volumes when data services are enabled at the application level
- Improved storage utilization with granular level provisioning
- Common management using Policy Based Management
Basic Commands for VSAN
vSAN is one of VMware's flagship products and a core building block for the Software-Defined Data Center.
Let us understand the different terminologies used in VSAN :
CMMDS – Cluster Monitoring, Membership, and Directory Service
CLOMD – Cluster Level Object Manager Daemon
OSFSD – Object Storage File System Daemon
CLOM – Cluster Level Object Manager
OSFS – Object Storage File System
UUID – Universally unique identifier
VSANVP – Virtual SAN Vendor Provider
SPBM – Storage Policy-Based Management
VSA – Virtual Storage Appliance
MD – Magnetic disk
SSD – Solid-State Drive
RVC – Ruby vSphere Console
RDT – Reliable Datagram Transport
Let us get into details for each one of them :
1: CMMDS
In the ESXi Shell, there is a vSAN utility called cmmds-tool, which stands for Cluster Monitoring, Membership, and Directory Service. This tool allows you to perform a variety of operations and queries against the vSAN nodes and their associated objects.
A few examples of cmmds-tool commands:
cmmds-tool find -u uuid -f json |less
Find command Example
cmmds-tool find -t HOSTNAME
cmmds-tool find -t DISK | grep "DISK" | wc -l
cmmds-tool amimember
cmmds-tool whoami
cmmds-tool find -t DISK |grep “DISK”
cmmds-tool find |grep name
2: CLOMD
Below are a few commands that will help you determine the basic configuration of vSAN through the ESXi host.
1: Tag a LUN as SSD
esxcli vsan storage list|grep -i Device|wc -l
df -h
esxcli vsan storage list|grep -i Device
esxcfg-scsidevs -a
esxcli storage nmp device list
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2013188
2: esxcli commands for vSAN:
esxcli vsan datastore name get >>>VSAN datastore Name
esxcli vsan network list >>> Network configuration of VSAN
esxcli vsan cluster get >>> Cluster information of VSAN
esxcli vsan policy getdefault >>>VSAN storage policy
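The four esxcli queries above can be bundled into one status-report pass. A minimal sketch: the wrapper name vsan_report is hypothetical, it only invokes the esxcli vsan subcommands listed in this section, and it skips gracefully when run on a machine without esxcli (for example, a workstation instead of an ESXi host).

```shell
# vsan_report: run the basic vSAN status queries from this section in one pass.
vsan_report() {
    if ! command -v esxcli > /dev/null 2>&1; then
        echo "esxcli not found; run this on an ESXi host"
        return 1
    fi
    # Subcommands taken directly from the list above.
    for sub in "datastore name get" "network list" "cluster get" "policy getdefault"; do
        echo "=== esxcli vsan $sub ==="
        esxcli vsan $sub          # unquoted on purpose: the subcommand is multiple words
    done
}

# Example: vsan_report > /tmp/vsan_report.txt
```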
