Change the EVC Mode for a Cluster

Verify:

• Verify that all hosts in the cluster have supported CPUs for the EVC mode you want to enable. See http://kb.vmware.com/kb/1003212 for a list of supported CPUs.
• Verify that all hosts in the cluster are connected and registered on vCenter Server. The cluster cannot contain a disconnected host.
• Verify that virtual machines are in the required power state, which depends on whether you raise or lower the EVC mode:

EVC Mode: Raise the EVC mode to a CPU baseline with more features.
Virtual Machine Power Action: Running virtual machines can remain powered on. New EVC mode features are not available to the virtual machines until they are powered off and powered back on again. A full power cycle is required; rebooting the guest operating system or suspending and resuming the virtual machine is not sufficient.

EVC Mode: Lower the EVC mode to a CPU baseline with fewer features.
Virtual Machine Power Action: Power off virtual machines if they are powered on and running at a higher EVC mode than the one you intend to enable.
Procedure:

1. Select a cluster in the inventory.
2. Click the Manage tab and click Settings.
3. Select VMware EVC and click Edit.
4. Select whether to enable or disable EVC:

   Disable EVC: The EVC feature is disabled. CPU compatibility is not enforced for the hosts in this cluster.
   Enable EVC for AMD Hosts: The EVC feature is enabled for AMD hosts.
   Enable EVC for Intel Hosts: The EVC feature is enabled for Intel hosts.

5. From the VMware EVC Mode drop-down menu, select the baseline CPU feature set that you want to enable for the cluster.
   If you cannot select the EVC Mode, the Compatibility pane displays the reason and the relevant hosts for each reason.
6. Click OK.

About this task:

Several EVC approaches are available to ensure CPU compatibility:

• If all the hosts in a cluster are compatible with a newer EVC mode, you can change the EVC mode of an existing EVC cluster.
• You can enable EVC for a cluster that does not have EVC enabled.
• You can raise the EVC mode to expose more CPU features.
• You can lower the EVC mode to hide CPU features and increase compatibility.
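
The same EVC change can also be scripted against the vSphere API. The following is a minimal sketch, assuming the pyVmomi library; the vCenter host name, credentials, cluster name, and the example EVC mode key (intel-sandybridge) are placeholders, and the cluster must already meet the prerequisites listed above.

```python
# Sketch: change a cluster's EVC mode through the vSphere API (pyVmomi assumed).
# Host name, credentials, cluster name, and the EVC mode key are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Cluster01")

    evc_manager = cluster.EvcManager()          # per-cluster EVC manager
    supported = [m.key for m in evc_manager.evcState.supportedEVCMode]
    print("Supported EVC modes:", supported)

    # Raising the mode leaves running VMs powered on; lowering it requires
    # powering off VMs that run at a higher EVC mode, as noted above.
    task = evc_manager.ConfigureEvcMode_Task(evcModeKey="intel-sandybridge")
    WaitForTask(task)
finally:
    Disconnect(si)
```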

Limits on Simultaneous Migrations

vCenter Server places limits on the number of simultaneous virtual machine migration and provisioning operations that can occur on each host, network, and datastore. Each operation, such as a migration with vMotion or cloning a virtual machine, is assigned a resource cost. Each host, datastore, or network resource has a maximum cost that it can support at any one time. Any new migration or provisioning operation that causes a resource to exceed its maximum cost does not proceed immediately, but is queued until other operations complete and release resources. Each of the network, datastore, and host limits must be satisfied for the operation to proceed.

Network Limits:

Network limits apply only to migrations with vMotion. Network limits depend on the version of ESXi and the network type. All migrations with vMotion have a network resource cost of 1.

Network Limits for Migration with vMotion

Operation   ESXi Version         Network Type   Maximum Cost
vMotion     5.0, 5.1, 5.5, 6.0   1GigE          4
vMotion     5.0, 5.1, 5.5, 6.0   10GigE         8

Datastore Limits:

Datastore limits apply to migrations with vMotion and with Storage vMotion. A migration with vMotion has a resource cost of 1 against the shared virtual machine’s datastore. A migration with Storage vMotion has a resource cost of 16 against the source datastore and 16 against the destination datastore.

Datastore Limits and Resource Costs for vMotion and Storage vMotion

Operation         ESXi Version         Maximum Cost Per Datastore   Datastore Resource Cost
vMotion           5.0, 5.1, 5.5, 6.0   128                          1
Storage vMotion   5.0, 5.1, 5.5, 6.0   128                          16

Host Limits:

Host limits apply to migrations with vMotion, Storage vMotion, and other provisioning operations such as cloning, deployment, and cold migration. All hosts have a maximum cost per host of 8. For example, on an ESXi 5.0 host, you can perform 2 Storage vMotion operations, or 1 Storage vMotion and 4 vMotion operations.

Host Migration Limits and Resource Costs for vMotion, Storage vMotion, and Provisioning Operations

Operation                        ESXi Version         Derived Limit Per Host   Host Resource Cost
vMotion                          5.0, 5.1, 5.5, 6.0   8                        1
Storage vMotion                  5.0, 5.1, 5.5, 6.0   2                        4
vMotion Without Shared Storage   5.1, 5.5, 6.0        2                        4
Other provisioning operations    5.0, 5.1, 5.5, 6.0   8                        1
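
The tables above can be read as a simple admission check: an operation proceeds only if its cost fits under every relevant host, datastore, and network ceiling. The sketch below is not a VMware API, only an illustration of that arithmetic using the documented costs and limits.

```python
# Illustrative sketch of the cost accounting described above (not a VMware API).
# Costs and limits reflect the tables for ESXi 5.x/6.0: host max 8, datastore max 128,
# network max 4 (1GigE) or 8 (10GigE).
HOST_MAX = 8
DATASTORE_MAX = 128
NETWORK_MAX = {"1GigE": 4, "10GigE": 8}

# Resource costs per operation type. Network limits apply only to vMotion.
COSTS = {
    "vmotion":         {"host": 1, "network": 1, "datastore": 1},   # against the shared datastore
    "storage_vmotion": {"host": 4, "network": 0, "datastore": 16},  # against source and destination
}

def fits(op, host_load, datastore_loads, network_load, network_type="10GigE"):
    """Return True if one more `op` stays within every relevant limit."""
    c = COSTS[op]
    if host_load + c["host"] > HOST_MAX:
        return False
    if c["network"] and network_load + c["network"] > NETWORK_MAX[network_type]:
        return False
    # Every datastore the operation touches must stay under its own ceiling.
    return all(load + c["datastore"] <= DATASTORE_MAX for load in datastore_loads)

# Example from the text: 1 Storage vMotion (host cost 4) plus 4 vMotions (cost 1 each)
# consume the full host budget of 8, so a further vMotion must queue.
print(fits("vmotion", host_load=8, datastore_loads=[5], network_load=5))  # False
```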

 

Virtualized I/O with Cisco Virtual Interface Cards

Cisco VICs are PCIe-compliant interfaces that support up to 256 PCIe devices with
dynamically configured type (NIC or HBA), identity (MAC address or worldwide
name [WWN]), fabric failover policy, bandwidth, and QoS policy settings. With
Cisco VICs, server configuration—including I/O configuration—becomes configurable
on demand, making servers stateless resources that can be deployed to meet
any workload need at any time, without any physical reconfiguration or recabling
required. Cisco VICs support up to 80 Gbps of connectivity and are available in
multiple form factors:

• mLOM: These Cisco VICs can be ordered preinstalled in Cisco UCS M3 and
M4 blade servers, occupying a dedicated slot for the device. If more
than 40 Gbps of bandwidth is needed, a port expander card can be installed in
the server’s mezzanine slot to give the card access to an additional 40 Gbps of
bandwidth. When Cisco UCS 2304XP Fabric Extenders are installed in the blade
chassis, the Cisco UCS VIC 1340 detects the availability of 40 Gigabit Ethernet
and disables the port channel for greater efficiency.

• Mezzanine: Standard Cisco VICs can be installed in any blade server’s mezzanine
slot: one for half-width blade servers, and up to two for full-width blade servers.
Each Cisco VIC supports up to 80 Gbps, for a total of up to 320 Gbps of
aggregate bandwidth for double-width, double-height servers such as the Cisco
UCS B460 M4 Blade Server.

• PCIe: PCIe form-factor cards can be installed in Cisco rack servers. Cisco VICs
are required when you integrate these servers into Cisco UCS because they
have the circuitry to pass the unified fabric’s management traffic to the server’s
management network, enabling single-wire, unified management of rack servers.

Virtualized I/O with Converged Network Adapters

Virtual links can originate from converged network adapters that typically host a
dual 10 Gigabit Ethernet NIC and a dual HBA from either Emulex or QLogic, along
with circuitry to multiplex the four streams of traffic onto two 10-Gbps unified fabric
links. Cisco innovations first brought this concept to market, with the first generation
of converged network adapters (CNAs) supported by Cisco silicon that multiplexed
multiple traffic flows onto the unified fabric.

With servers connected to Cisco UCS through CNAs, the traffic from each of the
interface’s four devices is passed over four virtual links that terminate at virtual ports
within the fabric interconnects.

Virtualized I/O over the Unified Fabric

The unified fabric virtualizes I/O so that rather than requiring each server to be
equipped with a set of physical I/O interfaces to separate network functions, all I/O
in the system is carried over a single set of cables and sent to separate physical
networks at the system’s fabric interconnects as necessary. For example, storage
traffic destined for Fibre Channel storage systems is carried in the system using
FCoE. At the fabric interconnects, storage-access traffic can transition to physical
Fibre Channel networks through a Fibre Channel transceiver installed in one or more
of the fabric interconnect’s unified ports.

I/O is further virtualized through the use of separate virtual network links for
each class and each flow of traffic. For example, management, storage-access,
and IP network traffic emanating from a server is carried to the system’s fabric
interconnects with the same level of secure isolation as if it were carried over
separate physical cables. These virtual network links originate within the server’s
converged network adapters and terminate at virtual ports within the system’s fabric
interconnects.

These virtual links are managed exactly as if they were physical networks. The
only characteristic that distinguishes physical from virtual networks within the
fabric interconnects is the naming of the ports. This approach has a multitude of
benefits: changing the way that servers are configured makes servers flexible,
adaptable resources that can be configured through software to meet any workload
requirement at any time. Servers are no longer tied to a specific function for
their lifetime because of their physical configuration. Physical configurations are
adaptable through software settings. The concept of virtual network links brings
immense power and flexibility to support almost any workload requirement through
flexible network configurations that bring complete visibility and control for both
physical servers and virtual machines.

Cisco UCS Architecture

Unified Computing System Manager:



  • Embedded device manager for family of UCS components
  • Enables stateless computing via Service Profiles
  • Efficient scale: Same effort for 1 to N blades
  • APIs for integration with new and existing data center infrastructure
Management Protocols:

+++++++++++++++++++





UCS 6248UP Fabric Interconnect:

+++++++++++++++++++++++++++++
  • High Density 48 ports in 1RU
  • 1Tbps Switching capability
  • All ports can be used as uplinks or downlinks
  • All ports can be configured to support either 1Gb or 10Gb speeds
  • Unified Ports
  • 1 Expansion slot
  • 2us Latency
  • 80 PLUS Gold PSUs
  • Backward and forward Compatibility




UCS 6296UP Fabric Interconnect:

++++++++++++++++++++++++++
  • High Density 96 ports in 2RU
  • 2Tbps Switching capability
  • All ports can be used as uplinks or downlinks
  • All ports can be configured to support either 1Gb or 10Gb speeds
  • Unified Ports
  • 4 Expansion slots
  • 2us Latency
  • 80 PLUS Gold PSUs
  • Backward and forward Compatibility


UCS 6200 Expansion Module:

+++++++++++++++++++++++++
  • 16 “Unified Ports”
  • Ports can be configured as either Ethernet or Native FC Ports
  • Ethernet operations at 1/10 Gigabit Ethernet
  • Fibre Channel operations at 8/4/2/1G
  • Uses existing Ethernet SFP+ and Cisco 8/4/2G and 4/2/1G FC Optics




UCS 2204XP I/O Module:

+++++++++++++++++++++
  • Increased uplink bandwidth
  • 4 x 10 Gig network-facing ports
  • Double the server-facing bandwidth
  • 16 x 10 Gig = 2 per half width slot
  • Two I/O Modules per chassis
  • 40Gbps to a single half-width blade (20Gbps left and right)
  • 80Gbps to a full-width blade
  • Built in chassis management
  • Fully managed by UCSM




UCS 2208XP I/O Module:

+++++++++++++++++++++
  • Double the uplink bandwidth
  • 8 x 10 Gig network-facing ports
  • Quadruple the server-facing bandwidth
  • 32 x 10 Gig = 4 per half width slot
  • Two I/O Modules per chassis
  • 80Gbps to a single half-width blade (40Gbps left and right)
  • 160Gbps to a full-width blade
  • Built in Chassis Management
  • Fully Managed by UCSM




UCS 5108 Blade Chassis:
+++++++++++++++++++++

Chassis
  • Up to 8 half slot blades
  • Up to 4 full slot blades
  • 4x power supplies, N+N grid redundant
  • 8x fans included
  • 2x UCS 2104 Fabric Extender
  • All items hot-pluggable

UCS Rack Servers:
+++++++++++++++++

C240 M3 Rack Server:
  • 2 Socket Intel E5-2600
  • 24 DIMM slots
  • Maximum memory speed 1600MHz
  • 24 or 12 Internal HDDs, SFF and 3.5” – SAS, SATA and SSD options
  • Battery Backed cache option
  • 650W and 1200W PSUs – Platinum Rated
  • 5 PCIe slots – GPU ready
  • Height 2RU
  • Integrated CIMC and KVM

(Related headings with no surviving content: Rack Mount RAID Controllers; I/O Adapters for B-Series – M72KR-Q, M72KR-E, UCS 1280 VIC; References.)

 

UCS components

The basic Cisco components of the UCS are:

UCS Manager: Cisco UCS Manager implements policy-based management of the server and network resources. Network, storage, and server administrators all create service profiles, allowing the manager to configure the servers, adapters, and fabric extenders with the appropriate isolation, quality of service (QoS), and uplink connectivity. It also provides APIs for integration with existing data center systems management tools. An XML interface allows the system to be monitored or configured by upper-level systems management tools.
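
Because the XML interface is the integration point, a minimal monitoring exchange looks roughly like the following sketch. It assumes the standard UCS Manager XML API endpoint (/nuova) and the aaaLogin/configResolveClass/aaaLogout methods; the manager address and credentials are placeholders, and TLS verification is disabled only for brevity.

```python
# Sketch: query UCS Manager through its XML API over HTTP (requests assumed).
# The manager host and credentials are placeholders; production code should verify TLS.
import requests
import xml.etree.ElementTree as ET

UCSM = "https://ucsm.example.com/nuova"

# aaaLogin returns a session cookie used by subsequent requests.
login = requests.post(
    UCSM, verify=False,
    data='<aaaLogin inName="admin" inPassword="password" />')
cookie = ET.fromstring(login.text).attrib["outCookie"]

# Resolve all blade objects (classId computeBlade) in the management domain.
query = requests.post(
    UCSM, verify=False,
    data=f'<configResolveClass cookie="{cookie}" classId="computeBlade" inHierarchical="false" />')
for blade in ET.fromstring(query.text).iter("computeBlade"):
    print(blade.attrib.get("dn"), blade.attrib.get("operState"))

# Close the session.
requests.post(UCSM, verify=False, data=f'<aaaLogout inCookie="{cookie}" />')
```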

UCS fabric interconnect: Provides networking and management for attached blades and chassis over 10 Gigabit Ethernet and FCoE. All attached blades are part of a single management domain. Deployed in redundant pairs, the 20-port and 40-port models offer centralized management with Cisco UCS Manager software and virtual machine optimized services with support for VN-Link.

Cisco Fabric Manager: Manages storage networking across all Cisco SAN and unified fabrics with control of FC and FCoE. Offers unified discovery of all Cisco Data Center 3.0 devices as well as task automation and reporting. Provides quality-of-service (QoS) management, performance monitoring, federated reporting, troubleshooting tools, and discovery and configuration automation.

Fabric extenders: Connect the fabric interconnects to the blade server enclosure with 10 Gigabit Ethernet connections, simplifying diagnostics, cabling, and management. The fabric extender is similar to a distributed line card and also manages the chassis environment (the power supplies, fans, and blades), so separate chassis management modules are not required. Each UCS chassis can support up to two fabric extenders for redundancy.

SAN Booting to Allow Server Mobility

Booting over a network (LAN or SAN) is a mature technology and an important step in moving toward stateless computing, which eliminates the static binding between a physical server and the OS and applications it is supposed to run.

The OS and applications are decoupled from the physical hardware and reside on the network.

The mapping between the physical server and the OS on the network is performed on demand when the server is deployed. Some of the benefits of booting from a network are:

• Reduced server footprint because fewer components (no disk) and resources are needed

• Simplified disaster and server failure recovery

• Higher availability because of the absence of failure-prone local hard drives

• Centralized image management

• Rapid redeployment

With SAN booting, the image resides on the SAN, and the server communicates with the SAN through an HBA. The HBA’s BIOS contains the instructions that enable the server to find the boot disk. A common practice is to have the boot disk exposed to the server as LUN ID 0.

 

The Cisco UCS M71KR-E Emulex CNA, Cisco UCS M71KR-Q QLogic CNA, and Cisco UCS M81KR Virtual Interface Card (VIC) are all capable of booting from a SAN.

Management of Virtual Servers in the SAN

Typically, virtual servers do not have an identity in the SAN: they do not log in to the SAN like physical servers do. However, if controlling and monitoring of the virtual servers is required, N-port ID virtualization (NPIV) can be used.

This approach requires you to:

• Have a Fibre Channel adapter and SAN switch that support NPIV

• Enable NPIV on the virtual infrastructure, such as by using VMware ESX Raw Device Mapping (RDM)

• Assign virtual port worldwide names (pWWNs) to the virtual servers

• Provision the SAN switches and storage to allow access

By zoning the virtual pWWNs in the SAN to permit access, you can control virtual server SAN access just as with physical servers. In addition, you can monitor virtual servers and provide service levels just as with any physical server.

Edit a Fileset Template

Edit a fileset template to change the set of data that the template defines.

NOTE:
++++++++

The Rubrik cluster applies the changes made to a fileset template to all filesets that are derived from the template.

Fileset changes apply only to new backups, not to existing backups.

STEPS:

++++++

1. Log in to the Rubrik web UI.

2. In the left-side menu, click Protection > Linux Hosts.

3. Click Templates.

4. Select a template.

5. Open the ellipsis menu and select Edit.

6. Make changes to the field values.

7. Click Update.
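
The same edit can be automated against the cluster's REST API instead of the web UI. The sketch below assumes a fileset-template endpoint of the form /api/v1/fileset_template with includes/excludes fields; these paths, parameters, and field names are assumptions to verify against your cluster's API documentation before use.

```python
# Sketch: update a fileset template over the Rubrik REST API (endpoint paths and
# field names are assumptions; verify them against your cluster's API documentation).
import requests

CLUSTER = "https://rubrik.example.com"
AUTH = ("admin", "password")   # placeholder credentials

# Find the template by name (assumed endpoint and query parameter).
resp = requests.get(f"{CLUSTER}/api/v1/fileset_template",
                    params={"name": "LinuxEtc"}, auth=AUTH, verify=False)
template = resp.json()["data"][0]

# Change the set of data the template defines; the change propagates to all derived
# filesets and applies only to new backups, as noted above.
patch = {"includes": ["/etc", "/var/log"], "excludes": ["/var/log/*.tmp"]}
update = requests.patch(f"{CLUSTER}/api/v1/fileset_template/{template['id']}",
                        json=patch, auth=AUTH, verify=False)
update.raise_for_status()
```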